00:00:00.002 Started by upstream project "autotest-per-patch" build number 126237 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.053 using credential 00000000-0000-0000-0000-000000000002 00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.071 Fetching changes from the remote Git repository 00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.101 Using shallow fetch with depth 1 00:00:00.101 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.101 > git --version # timeout=10 00:00:00.139 > git --version # 'git version 2.39.2' 00:00:00.139 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.168 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.168 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.257 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.267 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.278 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.278 > git config core.sparsecheckout # timeout=10 00:00:03.288 > git read-tree -mu HEAD # timeout=10 00:00:03.302 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.323 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.323 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.413 [Pipeline] Start of Pipeline 00:00:03.427 [Pipeline] library 00:00:03.429 Loading library shm_lib@master 00:00:03.429 Library shm_lib@master is cached. Copying from home. 00:00:03.445 [Pipeline] node 00:00:03.452 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:03.453 [Pipeline] { 00:00:03.464 [Pipeline] catchError 00:00:03.465 [Pipeline] { 00:00:03.477 [Pipeline] wrap 00:00:03.483 [Pipeline] { 00:00:03.489 [Pipeline] stage 00:00:03.490 [Pipeline] { (Prologue) 00:00:03.510 [Pipeline] echo 00:00:03.512 Node: VM-host-SM9 00:00:03.517 [Pipeline] cleanWs 00:00:03.524 [WS-CLEANUP] Deleting project workspace... 00:00:03.524 [WS-CLEANUP] Deferred wipeout is used... 
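The block above is the standard Jenkins prologue: a shallow (depth 1) checkout of the jbp job-config repository from Gerrit, followed by pipeline start, library load and workspace cleanup. As a rough illustration only, the same checkout could be reproduced by hand with the commands below; the Gerrit URL and revision are taken verbatim from the log, while credential and proxy handling (GIT_ASKPASS, the Intel DMZ proxy) are assumed to be configured separately.

# Hypothetical manual reproduction of the shallow jbp checkout shown above
git init jbp && cd jbp
git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
# 7caca6989... is the commit FETCH_HEAD resolved to ("jenkins/jjb-config: Purge centos leftovers")
git checkout -f 7caca6989ac753a10259529aadac5754060382af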
00:00:03.530 [WS-CLEANUP] done 00:00:03.757 [Pipeline] setCustomBuildProperty 00:00:03.828 [Pipeline] httpRequest 00:00:03.846 [Pipeline] echo 00:00:03.848 Sorcerer 10.211.164.101 is alive 00:00:03.855 [Pipeline] httpRequest 00:00:03.859 HttpMethod: GET 00:00:03.859 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.860 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.860 Response Code: HTTP/1.1 200 OK 00:00:03.861 Success: Status code 200 is in the accepted range: 200,404 00:00:03.861 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.269 [Pipeline] sh 00:00:04.540 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.552 [Pipeline] httpRequest 00:00:04.570 [Pipeline] echo 00:00:04.572 Sorcerer 10.211.164.101 is alive 00:00:04.579 [Pipeline] httpRequest 00:00:04.583 HttpMethod: GET 00:00:04.583 URL: http://10.211.164.101/packages/spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:00:04.584 Sending request to url: http://10.211.164.101/packages/spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:00:04.584 Response Code: HTTP/1.1 200 OK 00:00:04.584 Success: Status code 200 is in the accepted range: 200,404 00:00:04.584 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:00:29.063 [Pipeline] sh 00:00:29.342 + tar --no-same-owner -xf spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:00:32.642 [Pipeline] sh 00:00:32.919 + git -C spdk log --oneline -n5 00:00:32.919 b26ca8289 event: add enforce_numa app option 00:00:32.919 83c8cffdc env: add enforce_numa environment option 00:00:32.919 804b11b4b env_dpdk: assert that SOCKET_ID_ANY == SPDK_ENV_SOCKET_ID_ANY 00:00:32.919 cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:00:32.919 24018edd4 all: replace spdk_env_opts_init/spdk_env_init with _ext variant 00:00:32.940 [Pipeline] writeFile 00:00:32.957 [Pipeline] sh 00:00:33.235 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:33.246 [Pipeline] sh 00:00:33.519 + cat autorun-spdk.conf 00:00:33.519 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.519 SPDK_TEST_NVMF=1 00:00:33.519 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.519 SPDK_TEST_USDT=1 00:00:33.519 SPDK_TEST_NVMF_MDNS=1 00:00:33.519 SPDK_RUN_UBSAN=1 00:00:33.519 NET_TYPE=virt 00:00:33.519 SPDK_JSONRPC_GO_CLIENT=1 00:00:33.519 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:33.524 RUN_NIGHTLY=0 00:00:33.530 [Pipeline] } 00:00:33.552 [Pipeline] // stage 00:00:33.568 [Pipeline] stage 00:00:33.570 [Pipeline] { (Run VM) 00:00:33.585 [Pipeline] sh 00:00:33.862 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:33.862 + echo 'Start stage prepare_nvme.sh' 00:00:33.862 Start stage prepare_nvme.sh 00:00:33.862 + [[ -n 5 ]] 00:00:33.862 + disk_prefix=ex5 00:00:33.862 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:33.862 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:33.862 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:33.862 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.862 ++ SPDK_TEST_NVMF=1 00:00:33.862 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.862 ++ SPDK_TEST_USDT=1 00:00:33.862 ++ SPDK_TEST_NVMF_MDNS=1 00:00:33.862 ++ SPDK_RUN_UBSAN=1 00:00:33.862 ++ NET_TYPE=virt 00:00:33.862 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:33.862 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:33.862 ++ RUN_NIGHTLY=0 00:00:33.862 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:33.862 + nvme_files=() 00:00:33.862 + declare -A nvme_files 00:00:33.862 + backend_dir=/var/lib/libvirt/images/backends 00:00:33.862 + nvme_files['nvme.img']=5G 00:00:33.862 + nvme_files['nvme-cmb.img']=5G 00:00:33.862 + nvme_files['nvme-multi0.img']=4G 00:00:33.862 + nvme_files['nvme-multi1.img']=4G 00:00:33.862 + nvme_files['nvme-multi2.img']=4G 00:00:33.862 + nvme_files['nvme-openstack.img']=8G 00:00:33.863 + nvme_files['nvme-zns.img']=5G 00:00:33.863 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:33.863 + (( SPDK_TEST_FTL == 1 )) 00:00:33.863 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:33.863 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:33.863 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:33.863 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:33.863 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:33.863 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:33.863 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:33.863 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.863 + for nvme in "${!nvme_files[@]}" 00:00:33.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:34.121 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.121 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:34.121 + echo 'End stage prepare_nvme.sh' 00:00:34.121 End stage prepare_nvme.sh 00:00:34.131 [Pipeline] sh 00:00:34.407 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:34.407 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:00:34.407 00:00:34.407 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:34.407 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:34.407 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:34.407 HELP=0 00:00:34.407 DRY_RUN=0 00:00:34.407 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:34.407 NVME_DISKS_TYPE=nvme,nvme, 00:00:34.407 NVME_AUTO_CREATE=0 00:00:34.407 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:34.407 NVME_CMB=,, 00:00:34.407 NVME_PMR=,, 00:00:34.407 NVME_ZNS=,, 00:00:34.407 NVME_MS=,, 00:00:34.407 NVME_FDP=,, 00:00:34.407 SPDK_VAGRANT_DISTRO=fedora38 00:00:34.407 SPDK_VAGRANT_VMCPU=10 00:00:34.407 SPDK_VAGRANT_VMRAM=12288 00:00:34.407 SPDK_VAGRANT_PROVIDER=libvirt 00:00:34.407 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:34.407 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:34.407 SPDK_OPENSTACK_NETWORK=0 00:00:34.407 VAGRANT_PACKAGE_BOX=0 00:00:34.407 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:34.407 FORCE_DISTRO=true 00:00:34.407 VAGRANT_BOX_VERSION= 00:00:34.407 EXTRA_VAGRANTFILES= 00:00:34.407 NIC_MODEL=e1000 00:00:34.407 00:00:34.407 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:00:34.407 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:37.687 Bringing machine 'default' up with 'libvirt' provider... 00:00:38.621 ==> default: Creating image (snapshot of base box volume). 00:00:38.621 ==> default: Creating domain with the following settings... 
00:00:38.621 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721070988_cc57feba3bafae677429
00:00:38.621 ==> default: -- Domain type: kvm
00:00:38.621 ==> default: -- Cpus: 10
00:00:38.621 ==> default: -- Feature: acpi
00:00:38.621 ==> default: -- Feature: apic
00:00:38.621 ==> default: -- Feature: pae
00:00:38.621 ==> default: -- Memory: 12288M
00:00:38.621 ==> default: -- Memory Backing: hugepages:
00:00:38.621 ==> default: -- Management MAC:
00:00:38.621 ==> default: -- Loader:
00:00:38.621 ==> default: -- Nvram:
00:00:38.621 ==> default: -- Base box: spdk/fedora38
00:00:38.621 ==> default: -- Storage pool: default
00:00:38.621 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721070988_cc57feba3bafae677429.img (20G)
00:00:38.621 ==> default: -- Volume Cache: default
00:00:38.621 ==> default: -- Kernel:
00:00:38.621 ==> default: -- Initrd:
00:00:38.621 ==> default: -- Graphics Type: vnc
00:00:38.621 ==> default: -- Graphics Port: -1
00:00:38.621 ==> default: -- Graphics IP: 127.0.0.1
00:00:38.621 ==> default: -- Graphics Password: Not defined
00:00:38.621 ==> default: -- Video Type: cirrus
00:00:38.621 ==> default: -- Video VRAM: 9216
00:00:38.622 ==> default: -- Sound Type:
00:00:38.622 ==> default: -- Keymap: en-us
00:00:38.622 ==> default: -- TPM Path:
00:00:38.622 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:38.622 ==> default: -- Command line args:
00:00:38.622 ==> default: -> value=-device,
00:00:38.622 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:38.622 ==> default: -> value=-drive,
00:00:38.622 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:38.622 ==> default: -> value=-device,
00:00:38.622 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.622 ==> default: -> value=-device,
00:00:38.622 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:38.622 ==> default: -> value=-drive,
00:00:38.622 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:38.622 ==> default: -> value=-device,
00:00:38.622 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.622 ==> default: -> value=-drive,
00:00:38.622 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:38.622 ==> default: -> value=-device,
00:00:38.622 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.622 ==> default: -> value=-drive,
00:00:38.622 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:38.622 ==> default: -> value=-device,
00:00:38.622 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.622 ==> default: Creating shared folders metadata...
00:00:38.622 ==> default: Starting domain.
00:00:39.999 ==> default: Waiting for domain to get an IP address...
00:00:58.087 ==> default: Waiting for SSH to become available...
00:00:58.087 ==> default: Configuring and enabling network interfaces...
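The "Command line args" list above is the NVMe portion of the QEMU command line that vagrant-libvirt passes through for this VM: controller nvme-0 (serial 12340) gets a single namespace backed by ex5-nvme.img, and controller nvme-1 (serial 12341) gets three namespaces backed by ex5-nvme-multi0/1/2.img. A rough stand-alone sketch follows; only the NVMe options are taken from the log, while the boot disk, memory/CPU and KVM options are placeholders and not part of the job's actual invocation.

# Illustrative only: flattened qemu-system-x86_64 equivalent of the NVMe topology above
qemu-system-x86_64 -enable-kvm -m 12288 -smp 10 \
  -drive file=fedora38-guest.img,if=virtio \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest these controllers show up later in the log (setup.sh status) as nvme0 with nvme0n1 at 0000:00:10.0 and nvme1 with nvme1n1, nvme1n2, nvme1n3 at 0000:00:11.0.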
00:01:01.383 default: SSH address: 192.168.121.89:22 00:01:01.383 default: SSH username: vagrant 00:01:01.383 default: SSH auth method: private key 00:01:03.916 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:12.020 ==> default: Mounting SSHFS shared folder... 00:01:12.981 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:12.981 ==> default: Checking Mount.. 00:01:14.356 ==> default: Folder Successfully Mounted! 00:01:14.356 ==> default: Running provisioner: file... 00:01:14.922 default: ~/.gitconfig => .gitconfig 00:01:15.489 00:01:15.489 SUCCESS! 00:01:15.489 00:01:15.489 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:15.489 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:15.489 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:15.489 00:01:15.500 [Pipeline] } 00:01:15.517 [Pipeline] // stage 00:01:15.526 [Pipeline] dir 00:01:15.526 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:15.528 [Pipeline] { 00:01:15.540 [Pipeline] catchError 00:01:15.542 [Pipeline] { 00:01:15.557 [Pipeline] sh 00:01:15.837 + vagrant ssh-config --host vagrant 00:01:15.837 + sed -ne /^Host/,$p 00:01:15.837 + tee ssh_conf 00:01:20.025 Host vagrant 00:01:20.025 HostName 192.168.121.89 00:01:20.025 User vagrant 00:01:20.025 Port 22 00:01:20.025 UserKnownHostsFile /dev/null 00:01:20.025 StrictHostKeyChecking no 00:01:20.025 PasswordAuthentication no 00:01:20.025 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:20.025 IdentitiesOnly yes 00:01:20.025 LogLevel FATAL 00:01:20.025 ForwardAgent yes 00:01:20.025 ForwardX11 yes 00:01:20.025 00:01:20.040 [Pipeline] withEnv 00:01:20.042 [Pipeline] { 00:01:20.058 [Pipeline] sh 00:01:20.429 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:20.429 source /etc/os-release 00:01:20.429 [[ -e /image.version ]] && img=$(< /image.version) 00:01:20.429 # Minimal, systemd-like check. 00:01:20.429 if [[ -e /.dockerenv ]]; then 00:01:20.429 # Clear garbage from the node's name: 00:01:20.429 # agt-er_autotest_547-896 -> autotest_547-896 00:01:20.429 # $HOSTNAME is the actual container id 00:01:20.429 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:20.429 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:20.429 # We can assume this is a mount from a host where container is running, 00:01:20.429 # so fetch its hostname to easily identify the target swarm worker. 
00:01:20.429 container="$(< /etc/hostname) ($agent)" 00:01:20.429 else 00:01:20.429 # Fallback 00:01:20.429 container=$agent 00:01:20.429 fi 00:01:20.429 fi 00:01:20.429 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:20.429 00:01:20.440 [Pipeline] } 00:01:20.458 [Pipeline] // withEnv 00:01:20.468 [Pipeline] setCustomBuildProperty 00:01:20.485 [Pipeline] stage 00:01:20.488 [Pipeline] { (Tests) 00:01:20.508 [Pipeline] sh 00:01:20.786 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:21.058 [Pipeline] sh 00:01:21.339 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:21.612 [Pipeline] timeout 00:01:21.613 Timeout set to expire in 40 min 00:01:21.615 [Pipeline] { 00:01:21.630 [Pipeline] sh 00:01:21.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:22.471 HEAD is now at b26ca8289 event: add enforce_numa app option 00:01:22.491 [Pipeline] sh 00:01:22.769 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:23.039 [Pipeline] sh 00:01:23.315 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:23.589 [Pipeline] sh 00:01:23.894 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:23.894 ++ readlink -f spdk_repo 00:01:23.894 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:23.894 + [[ -n /home/vagrant/spdk_repo ]] 00:01:23.894 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:23.894 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:23.894 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:23.894 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:23.894 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:23.894 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:23.894 + cd /home/vagrant/spdk_repo 00:01:23.894 + source /etc/os-release 00:01:23.894 ++ NAME='Fedora Linux' 00:01:23.894 ++ VERSION='38 (Cloud Edition)' 00:01:23.894 ++ ID=fedora 00:01:23.894 ++ VERSION_ID=38 00:01:23.894 ++ VERSION_CODENAME= 00:01:23.894 ++ PLATFORM_ID=platform:f38 00:01:23.894 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:23.894 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.894 ++ LOGO=fedora-logo-icon 00:01:23.894 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:23.894 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.894 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:23.894 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.894 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.894 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.894 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:23.894 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.894 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:23.894 ++ SUPPORT_END=2024-05-14 00:01:23.894 ++ VARIANT='Cloud Edition' 00:01:23.894 ++ VARIANT_ID=cloud 00:01:23.894 + uname -a 00:01:23.895 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:23.895 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:24.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:24.460 Hugepages 00:01:24.460 node hugesize free / total 00:01:24.460 node0 1048576kB 0 / 0 00:01:24.460 node0 2048kB 0 / 0 00:01:24.460 00:01:24.460 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.460 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:24.460 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:24.460 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:24.460 + rm -f /tmp/spdk-ld-path 00:01:24.460 + source autorun-spdk.conf 00:01:24.460 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.460 ++ SPDK_TEST_NVMF=1 00:01:24.460 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.460 ++ SPDK_TEST_USDT=1 00:01:24.460 ++ SPDK_TEST_NVMF_MDNS=1 00:01:24.460 ++ SPDK_RUN_UBSAN=1 00:01:24.460 ++ NET_TYPE=virt 00:01:24.460 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:24.460 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.460 ++ RUN_NIGHTLY=0 00:01:24.460 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.460 + [[ -n '' ]] 00:01:24.460 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:24.716 + for M in /var/spdk/build-*-manifest.txt 00:01:24.716 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.716 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.716 + for M in /var/spdk/build-*-manifest.txt 00:01:24.716 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.716 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.716 ++ uname 00:01:24.716 + [[ Linux == \L\i\n\u\x ]] 00:01:24.716 + sudo dmesg -T 00:01:24.716 + sudo dmesg --clear 00:01:24.716 + dmesg_pid=5166 00:01:24.717 + [[ Fedora Linux == FreeBSD ]] 00:01:24.717 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.717 + sudo dmesg -Tw 00:01:24.717 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.717 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.717 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.717 + 
export FIO_BIN=/usr/src/fio-static/fio 00:01:24.717 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.717 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.717 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.717 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.717 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.717 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.717 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.717 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.717 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.717 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.717 Test configuration: 00:01:24.717 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.717 SPDK_TEST_NVMF=1 00:01:24.717 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.717 SPDK_TEST_USDT=1 00:01:24.717 SPDK_TEST_NVMF_MDNS=1 00:01:24.717 SPDK_RUN_UBSAN=1 00:01:24.717 NET_TYPE=virt 00:01:24.717 SPDK_JSONRPC_GO_CLIENT=1 00:01:24.717 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.717 RUN_NIGHTLY=0 19:17:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:24.717 19:17:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.717 19:17:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.717 19:17:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.717 19:17:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.717 19:17:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.717 19:17:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.717 19:17:14 -- paths/export.sh@5 -- $ export PATH 00:01:24.717 19:17:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.717 19:17:14 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:24.717 19:17:14 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:24.717 19:17:14 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721071034.XXXXXX 00:01:24.717 19:17:14 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721071034.QJJR4I 00:01:24.717 19:17:14 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:24.717 19:17:14 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:24.717 19:17:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:24.717 19:17:14 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:24.717 19:17:14 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.717 19:17:14 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:24.717 19:17:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:24.717 19:17:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.717 19:17:14 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:24.717 19:17:14 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:24.717 19:17:14 -- pm/common@17 -- $ local monitor 00:01:24.717 19:17:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.717 19:17:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.717 19:17:14 -- pm/common@25 -- $ sleep 1 00:01:24.717 19:17:14 -- pm/common@21 -- $ date +%s 00:01:24.717 19:17:14 -- pm/common@21 -- $ date +%s 00:01:24.717 19:17:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721071034 00:01:24.717 19:17:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721071034 00:01:24.717 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721071034_collect-vmstat.pm.log 00:01:24.717 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721071034_collect-cpu-load.pm.log 00:01:26.090 19:17:15 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:26.090 19:17:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.090 19:17:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.090 19:17:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:26.090 19:17:15 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.090 Mon Jul 15 07:17:15 PM UTC 2024 00:01:26.090 19:17:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.090 v24.09-pre-229-gb26ca8289 00:01:26.090 19:17:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:26.090 19:17:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.090 19:17:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:26.090 19:17:15 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.090 19:17:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.090 19:17:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.090 ************************************ 00:01:26.090 START TEST ubsan 00:01:26.090 ************************************ 00:01:26.090 using ubsan 00:01:26.090 19:17:15 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:26.090 00:01:26.090 
real 0m0.000s 00:01:26.090 user 0m0.000s 00:01:26.090 sys 0m0.000s 00:01:26.090 19:17:15 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:26.090 ************************************ 00:01:26.090 END TEST ubsan 00:01:26.090 19:17:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.090 ************************************ 00:01:26.090 19:17:15 -- common/autotest_common.sh@1142 -- $ return 0 00:01:26.090 19:17:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.090 19:17:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.090 19:17:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.090 19:17:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:26.090 19:17:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:26.090 19:17:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:26.090 19:17:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:26.090 19:17:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:26.090 19:17:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:26.090 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:26.090 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:26.348 Using 'verbs' RDMA provider 00:01:39.478 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:54.345 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:54.345 go version go1.21.1 linux/amd64 00:01:54.345 Creating mk/config.mk...done. 00:01:54.345 Creating mk/cc.flags.mk...done. 00:01:54.345 Type 'make' to build. 00:01:54.345 19:17:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:54.345 19:17:42 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:54.345 19:17:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.345 19:17:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.345 ************************************ 00:01:54.345 START TEST make 00:01:54.345 ************************************ 00:01:54.345 19:17:42 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:54.345 make[1]: Nothing to be done for 'all'. 
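The ./configure line in this block records the exact feature set the job builds: UBSan, usdt, idxd, the iSCSI initiator, ublk, avahi (for the mDNS tests), the Go JSON-RPC client bindings and shared libraries, with unit tests disabled and an external fio tree expected at /usr/src/fio. A minimal sketch of repeating the same build outside the CI VM, assuming a local SPDK checkout with submodules initialized and the same fio source location:

# Reproduce the configure + make step from the log (paths and the -j value are assumptions)
cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
make -j"$(nproc)"

The job itself runs make -j10 to match the 10 vCPUs given to the VM.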
00:02:06.546 The Meson build system 00:02:06.546 Version: 1.3.1 00:02:06.546 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:06.546 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:06.546 Build type: native build 00:02:06.546 Program cat found: YES (/usr/bin/cat) 00:02:06.546 Project name: DPDK 00:02:06.546 Project version: 24.03.0 00:02:06.546 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:06.546 C linker for the host machine: cc ld.bfd 2.39-16 00:02:06.546 Host machine cpu family: x86_64 00:02:06.546 Host machine cpu: x86_64 00:02:06.546 Message: ## Building in Developer Mode ## 00:02:06.546 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.546 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.546 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.546 Program python3 found: YES (/usr/bin/python3) 00:02:06.546 Program cat found: YES (/usr/bin/cat) 00:02:06.546 Compiler for C supports arguments -march=native: YES 00:02:06.546 Checking for size of "void *" : 8 00:02:06.546 Checking for size of "void *" : 8 (cached) 00:02:06.546 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:06.546 Library m found: YES 00:02:06.546 Library numa found: YES 00:02:06.546 Has header "numaif.h" : YES 00:02:06.546 Library fdt found: NO 00:02:06.546 Library execinfo found: NO 00:02:06.546 Has header "execinfo.h" : YES 00:02:06.546 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:06.546 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.546 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.546 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.546 Run-time dependency openssl found: YES 3.0.9 00:02:06.546 Run-time dependency libpcap found: YES 1.10.4 00:02:06.546 Has header "pcap.h" with dependency libpcap: YES 00:02:06.546 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.546 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.546 Compiler for C supports arguments -Wformat: YES 00:02:06.546 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.546 Compiler for C supports arguments -Wformat-security: NO 00:02:06.546 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.546 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.546 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.546 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.546 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.546 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.546 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.546 Compiler for C supports arguments -Wundef: YES 00:02:06.546 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.546 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.546 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.546 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.546 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.546 Program objdump found: YES (/usr/bin/objdump) 00:02:06.546 Compiler for C supports arguments -mavx512f: YES 00:02:06.546 Checking if "AVX512 checking" compiles: YES 00:02:06.546 Fetching value of define "__SSE4_2__" : 1 00:02:06.546 Fetching value of define 
"__AES__" : 1 00:02:06.546 Fetching value of define "__AVX__" : 1 00:02:06.546 Fetching value of define "__AVX2__" : 1 00:02:06.546 Fetching value of define "__AVX512BW__" : (undefined) 00:02:06.546 Fetching value of define "__AVX512CD__" : (undefined) 00:02:06.546 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:06.546 Fetching value of define "__AVX512F__" : (undefined) 00:02:06.546 Fetching value of define "__AVX512VL__" : (undefined) 00:02:06.546 Fetching value of define "__PCLMUL__" : 1 00:02:06.546 Fetching value of define "__RDRND__" : 1 00:02:06.546 Fetching value of define "__RDSEED__" : 1 00:02:06.546 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:06.546 Fetching value of define "__znver1__" : (undefined) 00:02:06.546 Fetching value of define "__znver2__" : (undefined) 00:02:06.546 Fetching value of define "__znver3__" : (undefined) 00:02:06.546 Fetching value of define "__znver4__" : (undefined) 00:02:06.546 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.546 Message: lib/log: Defining dependency "log" 00:02:06.546 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.546 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.546 Checking for function "getentropy" : NO 00:02:06.546 Message: lib/eal: Defining dependency "eal" 00:02:06.546 Message: lib/ring: Defining dependency "ring" 00:02:06.546 Message: lib/rcu: Defining dependency "rcu" 00:02:06.546 Message: lib/mempool: Defining dependency "mempool" 00:02:06.546 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.546 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.546 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:06.546 Compiler for C supports arguments -mpclmul: YES 00:02:06.546 Compiler for C supports arguments -maes: YES 00:02:06.546 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.546 Compiler for C supports arguments -mavx512bw: YES 00:02:06.546 Compiler for C supports arguments -mavx512dq: YES 00:02:06.546 Compiler for C supports arguments -mavx512vl: YES 00:02:06.546 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.546 Compiler for C supports arguments -mavx2: YES 00:02:06.546 Compiler for C supports arguments -mavx: YES 00:02:06.546 Message: lib/net: Defining dependency "net" 00:02:06.546 Message: lib/meter: Defining dependency "meter" 00:02:06.546 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.546 Message: lib/pci: Defining dependency "pci" 00:02:06.546 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.546 Message: lib/hash: Defining dependency "hash" 00:02:06.546 Message: lib/timer: Defining dependency "timer" 00:02:06.546 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.546 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.546 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.546 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.546 Message: lib/power: Defining dependency "power" 00:02:06.546 Message: lib/reorder: Defining dependency "reorder" 00:02:06.546 Message: lib/security: Defining dependency "security" 00:02:06.546 Has header "linux/userfaultfd.h" : YES 00:02:06.546 Has header "linux/vduse.h" : YES 00:02:06.546 Message: lib/vhost: Defining dependency "vhost" 00:02:06.546 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.546 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.546 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.546 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.546 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.546 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.546 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.546 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.546 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.546 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.546 Program doxygen found: YES (/usr/bin/doxygen) 00:02:06.546 Configuring doxy-api-html.conf using configuration 00:02:06.546 Configuring doxy-api-man.conf using configuration 00:02:06.546 Program mandb found: YES (/usr/bin/mandb) 00:02:06.546 Program sphinx-build found: NO 00:02:06.546 Configuring rte_build_config.h using configuration 00:02:06.546 Message: 00:02:06.546 ================= 00:02:06.546 Applications Enabled 00:02:06.546 ================= 00:02:06.546 00:02:06.546 apps: 00:02:06.546 00:02:06.546 00:02:06.546 Message: 00:02:06.546 ================= 00:02:06.546 Libraries Enabled 00:02:06.546 ================= 00:02:06.546 00:02:06.546 libs: 00:02:06.546 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:06.546 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:06.546 cryptodev, dmadev, power, reorder, security, vhost, 00:02:06.546 00:02:06.546 Message: 00:02:06.546 =============== 00:02:06.546 Drivers Enabled 00:02:06.546 =============== 00:02:06.546 00:02:06.546 common: 00:02:06.546 00:02:06.546 bus: 00:02:06.546 pci, vdev, 00:02:06.546 mempool: 00:02:06.546 ring, 00:02:06.546 dma: 00:02:06.546 00:02:06.546 net: 00:02:06.546 00:02:06.546 crypto: 00:02:06.546 00:02:06.546 compress: 00:02:06.546 00:02:06.546 vdpa: 00:02:06.546 00:02:06.546 00:02:06.546 Message: 00:02:06.546 ================= 00:02:06.546 Content Skipped 00:02:06.546 ================= 00:02:06.546 00:02:06.546 apps: 00:02:06.546 dumpcap: explicitly disabled via build config 00:02:06.546 graph: explicitly disabled via build config 00:02:06.546 pdump: explicitly disabled via build config 00:02:06.546 proc-info: explicitly disabled via build config 00:02:06.546 test-acl: explicitly disabled via build config 00:02:06.546 test-bbdev: explicitly disabled via build config 00:02:06.546 test-cmdline: explicitly disabled via build config 00:02:06.546 test-compress-perf: explicitly disabled via build config 00:02:06.546 test-crypto-perf: explicitly disabled via build config 00:02:06.546 test-dma-perf: explicitly disabled via build config 00:02:06.546 test-eventdev: explicitly disabled via build config 00:02:06.546 test-fib: explicitly disabled via build config 00:02:06.546 test-flow-perf: explicitly disabled via build config 00:02:06.546 test-gpudev: explicitly disabled via build config 00:02:06.546 test-mldev: explicitly disabled via build config 00:02:06.546 test-pipeline: explicitly disabled via build config 00:02:06.547 test-pmd: explicitly disabled via build config 00:02:06.547 test-regex: explicitly disabled via build config 00:02:06.547 test-sad: explicitly disabled via build config 00:02:06.547 test-security-perf: explicitly disabled via build config 00:02:06.547 00:02:06.547 libs: 00:02:06.547 argparse: explicitly disabled via build config 00:02:06.547 metrics: explicitly disabled via build config 00:02:06.547 acl: explicitly disabled via build config 00:02:06.547 bbdev: explicitly disabled via build config 00:02:06.547 
bitratestats: explicitly disabled via build config 00:02:06.547 bpf: explicitly disabled via build config 00:02:06.547 cfgfile: explicitly disabled via build config 00:02:06.547 distributor: explicitly disabled via build config 00:02:06.547 efd: explicitly disabled via build config 00:02:06.547 eventdev: explicitly disabled via build config 00:02:06.547 dispatcher: explicitly disabled via build config 00:02:06.547 gpudev: explicitly disabled via build config 00:02:06.547 gro: explicitly disabled via build config 00:02:06.547 gso: explicitly disabled via build config 00:02:06.547 ip_frag: explicitly disabled via build config 00:02:06.547 jobstats: explicitly disabled via build config 00:02:06.547 latencystats: explicitly disabled via build config 00:02:06.547 lpm: explicitly disabled via build config 00:02:06.547 member: explicitly disabled via build config 00:02:06.547 pcapng: explicitly disabled via build config 00:02:06.547 rawdev: explicitly disabled via build config 00:02:06.547 regexdev: explicitly disabled via build config 00:02:06.547 mldev: explicitly disabled via build config 00:02:06.547 rib: explicitly disabled via build config 00:02:06.547 sched: explicitly disabled via build config 00:02:06.547 stack: explicitly disabled via build config 00:02:06.547 ipsec: explicitly disabled via build config 00:02:06.547 pdcp: explicitly disabled via build config 00:02:06.547 fib: explicitly disabled via build config 00:02:06.547 port: explicitly disabled via build config 00:02:06.547 pdump: explicitly disabled via build config 00:02:06.547 table: explicitly disabled via build config 00:02:06.547 pipeline: explicitly disabled via build config 00:02:06.547 graph: explicitly disabled via build config 00:02:06.547 node: explicitly disabled via build config 00:02:06.547 00:02:06.547 drivers: 00:02:06.547 common/cpt: not in enabled drivers build config 00:02:06.547 common/dpaax: not in enabled drivers build config 00:02:06.547 common/iavf: not in enabled drivers build config 00:02:06.547 common/idpf: not in enabled drivers build config 00:02:06.547 common/ionic: not in enabled drivers build config 00:02:06.547 common/mvep: not in enabled drivers build config 00:02:06.547 common/octeontx: not in enabled drivers build config 00:02:06.547 bus/auxiliary: not in enabled drivers build config 00:02:06.547 bus/cdx: not in enabled drivers build config 00:02:06.547 bus/dpaa: not in enabled drivers build config 00:02:06.547 bus/fslmc: not in enabled drivers build config 00:02:06.547 bus/ifpga: not in enabled drivers build config 00:02:06.547 bus/platform: not in enabled drivers build config 00:02:06.547 bus/uacce: not in enabled drivers build config 00:02:06.547 bus/vmbus: not in enabled drivers build config 00:02:06.547 common/cnxk: not in enabled drivers build config 00:02:06.547 common/mlx5: not in enabled drivers build config 00:02:06.547 common/nfp: not in enabled drivers build config 00:02:06.547 common/nitrox: not in enabled drivers build config 00:02:06.547 common/qat: not in enabled drivers build config 00:02:06.547 common/sfc_efx: not in enabled drivers build config 00:02:06.547 mempool/bucket: not in enabled drivers build config 00:02:06.547 mempool/cnxk: not in enabled drivers build config 00:02:06.547 mempool/dpaa: not in enabled drivers build config 00:02:06.547 mempool/dpaa2: not in enabled drivers build config 00:02:06.547 mempool/octeontx: not in enabled drivers build config 00:02:06.547 mempool/stack: not in enabled drivers build config 00:02:06.547 dma/cnxk: not in enabled drivers build 
config 00:02:06.547 dma/dpaa: not in enabled drivers build config 00:02:06.547 dma/dpaa2: not in enabled drivers build config 00:02:06.547 dma/hisilicon: not in enabled drivers build config 00:02:06.547 dma/idxd: not in enabled drivers build config 00:02:06.547 dma/ioat: not in enabled drivers build config 00:02:06.547 dma/skeleton: not in enabled drivers build config 00:02:06.547 net/af_packet: not in enabled drivers build config 00:02:06.547 net/af_xdp: not in enabled drivers build config 00:02:06.547 net/ark: not in enabled drivers build config 00:02:06.547 net/atlantic: not in enabled drivers build config 00:02:06.547 net/avp: not in enabled drivers build config 00:02:06.547 net/axgbe: not in enabled drivers build config 00:02:06.547 net/bnx2x: not in enabled drivers build config 00:02:06.547 net/bnxt: not in enabled drivers build config 00:02:06.547 net/bonding: not in enabled drivers build config 00:02:06.547 net/cnxk: not in enabled drivers build config 00:02:06.547 net/cpfl: not in enabled drivers build config 00:02:06.547 net/cxgbe: not in enabled drivers build config 00:02:06.547 net/dpaa: not in enabled drivers build config 00:02:06.547 net/dpaa2: not in enabled drivers build config 00:02:06.547 net/e1000: not in enabled drivers build config 00:02:06.547 net/ena: not in enabled drivers build config 00:02:06.547 net/enetc: not in enabled drivers build config 00:02:06.547 net/enetfec: not in enabled drivers build config 00:02:06.547 net/enic: not in enabled drivers build config 00:02:06.547 net/failsafe: not in enabled drivers build config 00:02:06.547 net/fm10k: not in enabled drivers build config 00:02:06.547 net/gve: not in enabled drivers build config 00:02:06.547 net/hinic: not in enabled drivers build config 00:02:06.547 net/hns3: not in enabled drivers build config 00:02:06.547 net/i40e: not in enabled drivers build config 00:02:06.547 net/iavf: not in enabled drivers build config 00:02:06.547 net/ice: not in enabled drivers build config 00:02:06.547 net/idpf: not in enabled drivers build config 00:02:06.547 net/igc: not in enabled drivers build config 00:02:06.547 net/ionic: not in enabled drivers build config 00:02:06.547 net/ipn3ke: not in enabled drivers build config 00:02:06.547 net/ixgbe: not in enabled drivers build config 00:02:06.547 net/mana: not in enabled drivers build config 00:02:06.547 net/memif: not in enabled drivers build config 00:02:06.547 net/mlx4: not in enabled drivers build config 00:02:06.547 net/mlx5: not in enabled drivers build config 00:02:06.547 net/mvneta: not in enabled drivers build config 00:02:06.547 net/mvpp2: not in enabled drivers build config 00:02:06.547 net/netvsc: not in enabled drivers build config 00:02:06.547 net/nfb: not in enabled drivers build config 00:02:06.547 net/nfp: not in enabled drivers build config 00:02:06.547 net/ngbe: not in enabled drivers build config 00:02:06.547 net/null: not in enabled drivers build config 00:02:06.547 net/octeontx: not in enabled drivers build config 00:02:06.547 net/octeon_ep: not in enabled drivers build config 00:02:06.547 net/pcap: not in enabled drivers build config 00:02:06.547 net/pfe: not in enabled drivers build config 00:02:06.547 net/qede: not in enabled drivers build config 00:02:06.547 net/ring: not in enabled drivers build config 00:02:06.547 net/sfc: not in enabled drivers build config 00:02:06.547 net/softnic: not in enabled drivers build config 00:02:06.547 net/tap: not in enabled drivers build config 00:02:06.547 net/thunderx: not in enabled drivers build config 00:02:06.547 
net/txgbe: not in enabled drivers build config 00:02:06.547 net/vdev_netvsc: not in enabled drivers build config 00:02:06.547 net/vhost: not in enabled drivers build config 00:02:06.547 net/virtio: not in enabled drivers build config 00:02:06.547 net/vmxnet3: not in enabled drivers build config 00:02:06.547 raw/*: missing internal dependency, "rawdev" 00:02:06.547 crypto/armv8: not in enabled drivers build config 00:02:06.547 crypto/bcmfs: not in enabled drivers build config 00:02:06.547 crypto/caam_jr: not in enabled drivers build config 00:02:06.547 crypto/ccp: not in enabled drivers build config 00:02:06.547 crypto/cnxk: not in enabled drivers build config 00:02:06.547 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.547 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.547 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.547 crypto/mlx5: not in enabled drivers build config 00:02:06.547 crypto/mvsam: not in enabled drivers build config 00:02:06.547 crypto/nitrox: not in enabled drivers build config 00:02:06.547 crypto/null: not in enabled drivers build config 00:02:06.547 crypto/octeontx: not in enabled drivers build config 00:02:06.547 crypto/openssl: not in enabled drivers build config 00:02:06.547 crypto/scheduler: not in enabled drivers build config 00:02:06.547 crypto/uadk: not in enabled drivers build config 00:02:06.547 crypto/virtio: not in enabled drivers build config 00:02:06.547 compress/isal: not in enabled drivers build config 00:02:06.547 compress/mlx5: not in enabled drivers build config 00:02:06.547 compress/nitrox: not in enabled drivers build config 00:02:06.547 compress/octeontx: not in enabled drivers build config 00:02:06.547 compress/zlib: not in enabled drivers build config 00:02:06.547 regex/*: missing internal dependency, "regexdev" 00:02:06.547 ml/*: missing internal dependency, "mldev" 00:02:06.547 vdpa/ifc: not in enabled drivers build config 00:02:06.547 vdpa/mlx5: not in enabled drivers build config 00:02:06.547 vdpa/nfp: not in enabled drivers build config 00:02:06.547 vdpa/sfc: not in enabled drivers build config 00:02:06.547 event/*: missing internal dependency, "eventdev" 00:02:06.547 baseband/*: missing internal dependency, "bbdev" 00:02:06.547 gpu/*: missing internal dependency, "gpudev" 00:02:06.547 00:02:06.547 00:02:06.547 Build targets in project: 85 00:02:06.547 00:02:06.547 DPDK 24.03.0 00:02:06.547 00:02:06.547 User defined options 00:02:06.547 buildtype : debug 00:02:06.547 default_library : shared 00:02:06.547 libdir : lib 00:02:06.547 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.547 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:06.547 c_link_args : 00:02:06.547 cpu_instruction_set: native 00:02:06.547 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:06.547 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:06.547 enable_docs : false 00:02:06.547 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:06.547 enable_kmods : false 00:02:06.547 max_lcores : 128 00:02:06.547 tests : false 00:02:06.547 00:02:06.547 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.114 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.114 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.114 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.114 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.114 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.114 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.114 [6/268] Linking static target lib/librte_log.a 00:02:07.679 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.679 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.937 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.937 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.937 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.937 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.195 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.195 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.195 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.195 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.195 [17/268] Linking static target lib/librte_telemetry.a 00:02:08.195 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.195 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.461 [20/268] Linking target lib/librte_log.so.24.1 00:02:08.719 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.719 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.719 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.976 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.976 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.976 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.976 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.976 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.234 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.234 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.234 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.234 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.234 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.234 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.234 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.492 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.492 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.750 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.007 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.007 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.007 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.007 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.007 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.265 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.265 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.265 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.265 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.265 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.524 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.524 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.782 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.782 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.041 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.041 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.041 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.299 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.299 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.299 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.299 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.299 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.299 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.557 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.816 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.816 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.817 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.817 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.074 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.331 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.331 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.331 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.331 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.589 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.589 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.589 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.589 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.846 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.846 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.846 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.103 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.103 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:13.103 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:13.360 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.360 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.616 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.616 [85/268] Linking static target lib/librte_ring.a 00:02:13.873 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.873 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.873 [88/268] Linking static target lib/librte_eal.a 00:02:13.873 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.130 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.130 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.130 [92/268] Linking static target lib/librte_rcu.a 00:02:14.388 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.388 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.388 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.388 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.388 [97/268] Linking static target lib/librte_mempool.a 00:02:14.388 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.645 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:14.645 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:14.645 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:14.645 [102/268] Linking static target lib/librte_mbuf.a 00:02:14.903 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.161 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.420 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.420 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.420 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.420 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.420 [109/268] Linking static target lib/librte_meter.a 00:02:15.420 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.420 [111/268] Linking static target lib/librte_net.a 00:02:15.678 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.678 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.678 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.936 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.936 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.936 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.936 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.936 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.500 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
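The numbered [N/268] steps above and below are the bundled DPDK 24.03.0 sub-build being driven by ninja with the "User defined options" summarized earlier in this log (debug buildtype, shared default_library, and a reduced set of apps, libs and drivers). As a minimal sketch only: a roughly equivalent standalone configure-and-build could look like the lines below. Invoking meson/ninja directly like this is an assumption, since the job drives the build through SPDK's own scripts; the option values are copied from that summary, and the long disable_apps/disable_libs lists are left out here rather than retyped.

  # Minimal sketch, not the job's actual invocation: configure and build the
  # bundled DPDK by hand with the option values printed in the summary above.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmax_lcores=128      # disable_apps/disable_libs omitted here; see the summary above
  ninja -C build-tmp -j 10  # matches the ninja command the log reports further down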
00:02:16.757 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.757 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.757 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.014 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.014 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.014 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.014 [127/268] Linking static target lib/librte_pci.a 00:02:17.014 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.272 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.272 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.272 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.272 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.272 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.272 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.531 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:17.531 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.531 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.531 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.531 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.531 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.531 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.531 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.531 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:17.531 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.788 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.788 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.045 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.045 [148/268] Linking static target lib/librte_ethdev.a 00:02:18.045 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.045 [150/268] Linking static target lib/librte_cmdline.a 00:02:18.306 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.306 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.596 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.596 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.596 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.596 [156/268] Linking static target lib/librte_timer.a 00:02:18.596 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.856 [158/268] Linking static target lib/librte_hash.a 00:02:18.857 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.857 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.857 [161/268] Linking static target 
lib/librte_compressdev.a 00:02:18.857 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.422 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.423 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.423 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.680 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.680 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.680 [168/268] Linking static target lib/librte_dmadev.a 00:02:19.680 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.938 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.938 [171/268] Linking static target lib/librte_cryptodev.a 00:02:19.938 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.938 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.939 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.939 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.939 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.196 [177/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.454 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.454 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.712 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.712 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.712 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.712 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.712 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.969 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.969 [186/268] Linking static target lib/librte_power.a 00:02:21.227 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.227 [188/268] Linking static target lib/librte_reorder.a 00:02:21.485 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.485 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.485 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.485 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.485 [193/268] Linking static target lib/librte_security.a 00:02:21.742 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.001 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.259 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.259 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.259 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.259 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.517 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.517 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.517 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.775 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.033 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.033 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.033 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.033 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.033 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.033 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.290 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.290 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.290 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.290 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.548 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.548 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.548 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.548 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:23.548 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.548 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.548 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.548 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.548 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.805 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.805 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.062 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.062 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.062 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.062 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.044 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.044 [230/268] Linking static target lib/librte_vhost.a 00:02:25.622 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.622 [232/268] Linking target lib/librte_eal.so.24.1 00:02:25.879 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.879 [234/268] Linking target lib/librte_pci.so.24.1 00:02:25.879 [235/268] Linking target lib/librte_meter.so.24.1 00:02:25.879 [236/268] Linking target lib/librte_ring.so.24.1 00:02:25.879 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.879 [238/268] Linking target lib/librte_timer.so.24.1 00:02:25.879 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:26.136 [240/268] 
Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.136 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.136 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.136 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.136 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.136 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:26.136 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:26.136 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.136 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.136 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.136 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.393 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.393 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.393 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:26.393 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.651 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:26.651 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:26.651 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:26.651 [258/268] Linking target lib/librte_net.so.24.1 00:02:26.651 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.651 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.651 [261/268] Linking target lib/librte_security.so.24.1 00:02:26.651 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.651 [263/268] Linking target lib/librte_hash.so.24.1 00:02:26.651 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.908 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.908 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.908 [267/268] Linking target lib/librte_power.so.24.1 00:02:26.908 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.908 INFO: autodetecting backend as ninja 00:02:26.908 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:28.277 CC lib/ut/ut.o 00:02:28.277 CC lib/log/log.o 00:02:28.277 CC lib/log/log_deprecated.o 00:02:28.277 CC lib/ut_mock/mock.o 00:02:28.277 CC lib/log/log_flags.o 00:02:28.277 LIB libspdk_log.a 00:02:28.277 SO libspdk_log.so.7.0 00:02:28.277 LIB libspdk_ut.a 00:02:28.277 LIB libspdk_ut_mock.a 00:02:28.277 SO libspdk_ut.so.2.0 00:02:28.277 SO libspdk_ut_mock.so.6.0 00:02:28.277 SYMLINK libspdk_log.so 00:02:28.277 SYMLINK libspdk_ut.so 00:02:28.535 SYMLINK libspdk_ut_mock.so 00:02:28.535 CC lib/dma/dma.o 00:02:28.535 CC lib/ioat/ioat.o 00:02:28.535 CXX lib/trace_parser/trace.o 00:02:28.535 CC lib/util/bit_array.o 00:02:28.535 CC lib/util/base64.o 00:02:28.535 CC lib/util/crc16.o 00:02:28.535 CC lib/util/cpuset.o 00:02:28.535 CC lib/util/crc32.o 00:02:28.535 CC lib/util/crc32c.o 00:02:28.793 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.793 CC lib/util/crc32_ieee.o 00:02:28.793 LIB libspdk_dma.a 00:02:28.793 SO libspdk_dma.so.4.0 00:02:28.793 CC 
lib/vfio_user/host/vfio_user.o 00:02:28.793 CC lib/util/crc64.o 00:02:28.793 SYMLINK libspdk_dma.so 00:02:28.793 CC lib/util/dif.o 00:02:28.793 CC lib/util/fd.o 00:02:29.051 CC lib/util/fd_group.o 00:02:29.051 CC lib/util/file.o 00:02:29.051 CC lib/util/hexlify.o 00:02:29.051 CC lib/util/iov.o 00:02:29.051 CC lib/util/math.o 00:02:29.051 LIB libspdk_ioat.a 00:02:29.051 SO libspdk_ioat.so.7.0 00:02:29.051 LIB libspdk_vfio_user.a 00:02:29.051 CC lib/util/net.o 00:02:29.309 SO libspdk_vfio_user.so.5.0 00:02:29.309 SYMLINK libspdk_ioat.so 00:02:29.309 CC lib/util/pipe.o 00:02:29.309 CC lib/util/strerror_tls.o 00:02:29.309 CC lib/util/string.o 00:02:29.309 SYMLINK libspdk_vfio_user.so 00:02:29.309 CC lib/util/xor.o 00:02:29.309 CC lib/util/uuid.o 00:02:29.309 CC lib/util/zipf.o 00:02:29.568 LIB libspdk_trace_parser.a 00:02:29.826 LIB libspdk_util.a 00:02:29.826 SO libspdk_trace_parser.so.5.0 00:02:29.826 SYMLINK libspdk_trace_parser.so 00:02:29.826 SO libspdk_util.so.9.1 00:02:30.084 SYMLINK libspdk_util.so 00:02:30.342 CC lib/vmd/vmd.o 00:02:30.342 CC lib/vmd/led.o 00:02:30.342 CC lib/env_dpdk/env.o 00:02:30.342 CC lib/rdma_utils/rdma_utils.o 00:02:30.342 CC lib/idxd/idxd.o 00:02:30.342 CC lib/env_dpdk/memory.o 00:02:30.342 CC lib/env_dpdk/pci.o 00:02:30.342 CC lib/json/json_parse.o 00:02:30.342 CC lib/rdma_provider/common.o 00:02:30.342 CC lib/conf/conf.o 00:02:30.342 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.600 LIB libspdk_conf.a 00:02:30.600 SO libspdk_conf.so.6.0 00:02:30.600 CC lib/json/json_util.o 00:02:30.600 SYMLINK libspdk_conf.so 00:02:30.600 CC lib/json/json_write.o 00:02:30.600 LIB libspdk_rdma_provider.a 00:02:30.600 CC lib/idxd/idxd_user.o 00:02:30.600 SO libspdk_rdma_provider.so.6.0 00:02:30.600 LIB libspdk_rdma_utils.a 00:02:30.858 SO libspdk_rdma_utils.so.1.0 00:02:30.858 SYMLINK libspdk_rdma_provider.so 00:02:30.858 CC lib/idxd/idxd_kernel.o 00:02:30.858 SYMLINK libspdk_rdma_utils.so 00:02:30.858 CC lib/env_dpdk/init.o 00:02:30.858 CC lib/env_dpdk/threads.o 00:02:30.858 CC lib/env_dpdk/pci_ioat.o 00:02:30.858 LIB libspdk_json.a 00:02:30.858 LIB libspdk_vmd.a 00:02:31.116 SO libspdk_json.so.6.0 00:02:31.116 SO libspdk_vmd.so.6.0 00:02:31.116 CC lib/env_dpdk/pci_virtio.o 00:02:31.116 CC lib/env_dpdk/pci_vmd.o 00:02:31.116 SYMLINK libspdk_json.so 00:02:31.116 CC lib/env_dpdk/pci_idxd.o 00:02:31.116 SYMLINK libspdk_vmd.so 00:02:31.116 CC lib/env_dpdk/pci_event.o 00:02:31.116 CC lib/env_dpdk/sigbus_handler.o 00:02:31.116 CC lib/env_dpdk/pci_dpdk.o 00:02:31.116 LIB libspdk_idxd.a 00:02:31.374 SO libspdk_idxd.so.12.0 00:02:31.374 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.374 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:31.374 CC lib/jsonrpc/jsonrpc_server.o 00:02:31.374 SYMLINK libspdk_idxd.so 00:02:31.374 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:31.374 CC lib/jsonrpc/jsonrpc_client.o 00:02:31.374 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:31.634 LIB libspdk_jsonrpc.a 00:02:31.895 SO libspdk_jsonrpc.so.6.0 00:02:31.895 SYMLINK libspdk_jsonrpc.so 00:02:31.895 LIB libspdk_env_dpdk.a 00:02:32.153 CC lib/rpc/rpc.o 00:02:32.153 SO libspdk_env_dpdk.so.15.0 00:02:32.411 SYMLINK libspdk_env_dpdk.so 00:02:32.411 LIB libspdk_rpc.a 00:02:32.411 SO libspdk_rpc.so.6.0 00:02:32.411 SYMLINK libspdk_rpc.so 00:02:32.669 CC lib/keyring/keyring.o 00:02:32.669 CC lib/keyring/keyring_rpc.o 00:02:32.669 CC lib/notify/notify.o 00:02:32.669 CC lib/notify/notify_rpc.o 00:02:32.669 CC lib/trace/trace.o 00:02:32.669 CC lib/trace/trace_flags.o 00:02:32.669 CC lib/trace/trace_rpc.o 00:02:32.927 LIB 
libspdk_notify.a 00:02:32.927 SO libspdk_notify.so.6.0 00:02:32.927 LIB libspdk_keyring.a 00:02:32.927 SYMLINK libspdk_notify.so 00:02:32.927 LIB libspdk_trace.a 00:02:32.927 SO libspdk_keyring.so.1.0 00:02:33.185 SO libspdk_trace.so.10.0 00:02:33.185 SYMLINK libspdk_keyring.so 00:02:33.185 SYMLINK libspdk_trace.so 00:02:33.443 CC lib/thread/thread.o 00:02:33.443 CC lib/thread/iobuf.o 00:02:33.443 CC lib/sock/sock.o 00:02:33.443 CC lib/sock/sock_rpc.o 00:02:34.007 LIB libspdk_sock.a 00:02:34.007 SO libspdk_sock.so.10.0 00:02:34.007 SYMLINK libspdk_sock.so 00:02:34.265 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:34.265 CC lib/nvme/nvme_ctrlr.o 00:02:34.265 CC lib/nvme/nvme_ns_cmd.o 00:02:34.265 CC lib/nvme/nvme_ns.o 00:02:34.265 CC lib/nvme/nvme_fabric.o 00:02:34.265 CC lib/nvme/nvme_pcie_common.o 00:02:34.265 CC lib/nvme/nvme_pcie.o 00:02:34.265 CC lib/nvme/nvme_qpair.o 00:02:34.265 CC lib/nvme/nvme.o 00:02:35.198 CC lib/nvme/nvme_quirks.o 00:02:35.455 LIB libspdk_thread.a 00:02:35.455 SO libspdk_thread.so.10.1 00:02:35.455 CC lib/nvme/nvme_transport.o 00:02:35.455 CC lib/nvme/nvme_discovery.o 00:02:35.455 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:35.455 SYMLINK libspdk_thread.so 00:02:35.455 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:35.714 CC lib/nvme/nvme_tcp.o 00:02:35.714 CC lib/accel/accel.o 00:02:35.971 CC lib/accel/accel_rpc.o 00:02:35.971 CC lib/nvme/nvme_opal.o 00:02:36.537 CC lib/blob/blobstore.o 00:02:36.537 CC lib/init/json_config.o 00:02:36.537 CC lib/init/subsystem.o 00:02:36.537 CC lib/init/subsystem_rpc.o 00:02:36.537 CC lib/init/rpc.o 00:02:36.795 CC lib/blob/request.o 00:02:36.795 CC lib/blob/zeroes.o 00:02:36.795 CC lib/blob/blob_bs_dev.o 00:02:36.795 CC lib/nvme/nvme_io_msg.o 00:02:37.053 LIB libspdk_init.a 00:02:37.053 SO libspdk_init.so.5.0 00:02:37.053 CC lib/virtio/virtio.o 00:02:37.053 SYMLINK libspdk_init.so 00:02:37.053 CC lib/virtio/virtio_vhost_user.o 00:02:37.053 CC lib/virtio/virtio_vfio_user.o 00:02:37.311 CC lib/virtio/virtio_pci.o 00:02:37.311 CC lib/nvme/nvme_poll_group.o 00:02:37.569 CC lib/nvme/nvme_zns.o 00:02:37.569 CC lib/accel/accel_sw.o 00:02:37.569 CC lib/nvme/nvme_stubs.o 00:02:37.828 LIB libspdk_accel.a 00:02:37.828 SO libspdk_accel.so.15.1 00:02:37.828 CC lib/event/app.o 00:02:38.086 LIB libspdk_virtio.a 00:02:38.086 SYMLINK libspdk_accel.so 00:02:38.086 CC lib/event/reactor.o 00:02:38.086 CC lib/event/log_rpc.o 00:02:38.086 SO libspdk_virtio.so.7.0 00:02:38.086 SYMLINK libspdk_virtio.so 00:02:38.086 CC lib/event/app_rpc.o 00:02:38.086 CC lib/event/scheduler_static.o 00:02:38.364 CC lib/nvme/nvme_auth.o 00:02:38.364 CC lib/bdev/bdev.o 00:02:38.364 CC lib/bdev/bdev_rpc.o 00:02:38.364 CC lib/bdev/bdev_zone.o 00:02:38.364 CC lib/bdev/part.o 00:02:38.622 CC lib/bdev/scsi_nvme.o 00:02:38.622 CC lib/nvme/nvme_cuse.o 00:02:38.622 CC lib/nvme/nvme_rdma.o 00:02:38.623 LIB libspdk_event.a 00:02:38.623 SO libspdk_event.so.14.0 00:02:38.881 SYMLINK libspdk_event.so 00:02:40.260 LIB libspdk_nvme.a 00:02:40.518 SO libspdk_nvme.so.13.1 00:02:41.084 SYMLINK libspdk_nvme.so 00:02:41.084 LIB libspdk_blob.a 00:02:41.084 SO libspdk_blob.so.11.0 00:02:41.342 SYMLINK libspdk_blob.so 00:02:41.600 CC lib/blobfs/tree.o 00:02:41.600 CC lib/blobfs/blobfs.o 00:02:41.600 CC lib/lvol/lvol.o 00:02:41.858 LIB libspdk_bdev.a 00:02:41.858 SO libspdk_bdev.so.15.1 00:02:42.116 SYMLINK libspdk_bdev.so 00:02:42.375 CC lib/nvmf/ctrlr.o 00:02:42.375 CC lib/nvmf/ctrlr_discovery.o 00:02:42.375 CC lib/nvmf/subsystem.o 00:02:42.375 CC lib/nvmf/ctrlr_bdev.o 00:02:42.375 CC lib/scsi/dev.o 00:02:42.375 
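The CC/LIB/SO/SYMLINK lines in this stretch are SPDK's own make output rather than DPDK: each CC line is one object file, LIB a static archive, SO a versioned shared object, and SYMLINK the matching unversioned link. As a minimal sketch only (the configure flags this job actually used are not visible in this part of the log, so the flags below are assumptions), a comparable build could be started with:

  # Sketch, assuming a standard spdk checkout with submodules initialized;
  # --with-shared is inferred from the SO/SYMLINK steps in this log and
  # --enable-debug is an assumption, not a flag read from the log.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --with-shared
  make -j10   # -j10 mirrors the job count used for the DPDK sub-build above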
CC lib/nbd/nbd.o 00:02:42.375 CC lib/ftl/ftl_core.o 00:02:42.375 CC lib/ublk/ublk.o 00:02:42.375 LIB libspdk_blobfs.a 00:02:42.375 SO libspdk_blobfs.so.10.0 00:02:42.633 SYMLINK libspdk_blobfs.so 00:02:42.633 CC lib/ublk/ublk_rpc.o 00:02:42.633 LIB libspdk_lvol.a 00:02:42.633 SO libspdk_lvol.so.10.0 00:02:42.633 CC lib/scsi/lun.o 00:02:42.633 SYMLINK libspdk_lvol.so 00:02:42.633 CC lib/ftl/ftl_init.o 00:02:42.892 CC lib/ftl/ftl_layout.o 00:02:42.892 CC lib/ftl/ftl_debug.o 00:02:42.892 CC lib/nbd/nbd_rpc.o 00:02:42.892 CC lib/scsi/port.o 00:02:43.150 CC lib/nvmf/nvmf.o 00:02:43.150 LIB libspdk_ublk.a 00:02:43.150 CC lib/ftl/ftl_io.o 00:02:43.150 SO libspdk_ublk.so.3.0 00:02:43.150 CC lib/nvmf/nvmf_rpc.o 00:02:43.408 CC lib/nvmf/transport.o 00:02:43.408 CC lib/scsi/scsi.o 00:02:43.408 LIB libspdk_nbd.a 00:02:43.408 SYMLINK libspdk_ublk.so 00:02:43.408 CC lib/nvmf/tcp.o 00:02:43.408 SO libspdk_nbd.so.7.0 00:02:43.408 CC lib/ftl/ftl_sb.o 00:02:43.408 SYMLINK libspdk_nbd.so 00:02:43.408 CC lib/nvmf/stubs.o 00:02:43.665 CC lib/scsi/scsi_bdev.o 00:02:43.665 CC lib/nvmf/mdns_server.o 00:02:43.665 CC lib/ftl/ftl_l2p.o 00:02:43.922 CC lib/scsi/scsi_pr.o 00:02:44.180 CC lib/ftl/ftl_l2p_flat.o 00:02:44.438 CC lib/ftl/ftl_nv_cache.o 00:02:44.438 CC lib/scsi/scsi_rpc.o 00:02:44.438 CC lib/nvmf/rdma.o 00:02:44.438 CC lib/nvmf/auth.o 00:02:44.438 CC lib/ftl/ftl_band.o 00:02:44.697 CC lib/scsi/task.o 00:02:44.697 CC lib/ftl/ftl_band_ops.o 00:02:44.697 CC lib/ftl/ftl_writer.o 00:02:44.697 CC lib/ftl/ftl_rq.o 00:02:44.697 CC lib/ftl/ftl_reloc.o 00:02:44.955 LIB libspdk_scsi.a 00:02:44.955 SO libspdk_scsi.so.9.0 00:02:45.213 CC lib/ftl/ftl_l2p_cache.o 00:02:45.213 CC lib/ftl/ftl_p2l.o 00:02:45.213 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.213 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.213 SYMLINK libspdk_scsi.so 00:02:45.213 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.471 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.471 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.729 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.729 CC lib/iscsi/conn.o 00:02:45.729 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.729 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.729 CC lib/vhost/vhost.o 00:02:45.986 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.986 CC lib/iscsi/init_grp.o 00:02:45.986 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.986 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.986 CC lib/iscsi/iscsi.o 00:02:45.986 CC lib/iscsi/md5.o 00:02:46.250 CC lib/iscsi/param.o 00:02:46.250 CC lib/iscsi/portal_grp.o 00:02:46.250 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.250 CC lib/vhost/vhost_rpc.o 00:02:46.250 CC lib/iscsi/tgt_node.o 00:02:46.545 CC lib/iscsi/iscsi_subsystem.o 00:02:46.803 CC lib/iscsi/iscsi_rpc.o 00:02:46.803 CC lib/iscsi/task.o 00:02:46.803 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.803 CC lib/vhost/vhost_scsi.o 00:02:46.803 CC lib/vhost/vhost_blk.o 00:02:47.062 CC lib/ftl/utils/ftl_conf.o 00:02:47.062 CC lib/vhost/rte_vhost_user.o 00:02:47.062 CC lib/ftl/utils/ftl_md.o 00:02:47.062 CC lib/ftl/utils/ftl_mempool.o 00:02:47.320 CC lib/ftl/utils/ftl_bitmap.o 00:02:47.578 CC lib/ftl/utils/ftl_property.o 00:02:47.578 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:47.578 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:47.836 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:47.836 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:47.836 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:47.836 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:47.836 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:48.094 LIB libspdk_nvmf.a 00:02:48.094 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:48.094 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:02:48.094 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:48.352 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:48.352 CC lib/ftl/base/ftl_base_dev.o 00:02:48.352 SO libspdk_nvmf.so.19.0 00:02:48.352 CC lib/ftl/base/ftl_base_bdev.o 00:02:48.352 LIB libspdk_iscsi.a 00:02:48.352 CC lib/ftl/ftl_trace.o 00:02:48.611 SO libspdk_iscsi.so.8.0 00:02:48.611 SYMLINK libspdk_nvmf.so 00:02:48.868 SYMLINK libspdk_iscsi.so 00:02:48.868 LIB libspdk_ftl.a 00:02:48.868 LIB libspdk_vhost.a 00:02:49.125 SO libspdk_vhost.so.8.0 00:02:49.125 SO libspdk_ftl.so.9.0 00:02:49.125 SYMLINK libspdk_vhost.so 00:02:49.687 SYMLINK libspdk_ftl.so 00:02:49.944 CC module/env_dpdk/env_dpdk_rpc.o 00:02:50.202 CC module/sock/posix/posix.o 00:02:50.202 CC module/accel/ioat/accel_ioat.o 00:02:50.202 CC module/keyring/linux/keyring.o 00:02:50.202 CC module/accel/dsa/accel_dsa.o 00:02:50.202 CC module/keyring/file/keyring.o 00:02:50.202 CC module/accel/error/accel_error.o 00:02:50.202 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:50.202 CC module/blob/bdev/blob_bdev.o 00:02:50.202 CC module/accel/iaa/accel_iaa.o 00:02:50.202 LIB libspdk_env_dpdk_rpc.a 00:02:50.202 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.458 CC module/keyring/linux/keyring_rpc.o 00:02:50.458 SYMLINK libspdk_env_dpdk_rpc.so 00:02:50.459 CC module/accel/error/accel_error_rpc.o 00:02:50.459 CC module/accel/iaa/accel_iaa_rpc.o 00:02:50.459 CC module/keyring/file/keyring_rpc.o 00:02:50.459 CC module/accel/dsa/accel_dsa_rpc.o 00:02:50.459 CC module/accel/ioat/accel_ioat_rpc.o 00:02:50.459 LIB libspdk_scheduler_dynamic.a 00:02:50.459 SO libspdk_scheduler_dynamic.so.4.0 00:02:50.459 LIB libspdk_accel_iaa.a 00:02:50.761 LIB libspdk_blob_bdev.a 00:02:50.761 LIB libspdk_accel_dsa.a 00:02:50.761 LIB libspdk_keyring_linux.a 00:02:50.761 SO libspdk_blob_bdev.so.11.0 00:02:50.761 SO libspdk_accel_iaa.so.3.0 00:02:50.761 SYMLINK libspdk_scheduler_dynamic.so 00:02:50.761 LIB libspdk_keyring_file.a 00:02:50.761 LIB libspdk_accel_error.a 00:02:50.761 SO libspdk_keyring_linux.so.1.0 00:02:50.761 SO libspdk_accel_dsa.so.5.0 00:02:50.762 SO libspdk_keyring_file.so.1.0 00:02:50.762 LIB libspdk_accel_ioat.a 00:02:50.762 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:50.762 SO libspdk_accel_error.so.2.0 00:02:50.762 SYMLINK libspdk_blob_bdev.so 00:02:50.762 SO libspdk_accel_ioat.so.6.0 00:02:50.762 SYMLINK libspdk_accel_iaa.so 00:02:50.762 SYMLINK libspdk_keyring_linux.so 00:02:50.762 SYMLINK libspdk_accel_dsa.so 00:02:50.762 SYMLINK libspdk_accel_error.so 00:02:50.762 SYMLINK libspdk_keyring_file.so 00:02:50.762 SYMLINK libspdk_accel_ioat.so 00:02:50.762 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.018 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.018 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.018 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.274 CC module/bdev/delay/vbdev_delay.o 00:02:51.274 CC module/bdev/malloc/bdev_malloc.o 00:02:51.274 CC module/bdev/gpt/gpt.o 00:02:51.274 LIB libspdk_sock_posix.a 00:02:51.274 CC module/bdev/null/bdev_null.o 00:02:51.274 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.274 CC module/bdev/error/vbdev_error.o 00:02:51.274 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.274 LIB libspdk_scheduler_gscheduler.a 00:02:51.274 SO libspdk_sock_posix.so.6.0 00:02:51.274 SO libspdk_scheduler_gscheduler.so.4.0 00:02:51.274 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.274 CC module/bdev/error/vbdev_error_rpc.o 00:02:51.274 CC module/bdev/nvme/bdev_nvme.o 00:02:51.274 SYMLINK libspdk_sock_posix.so 00:02:51.274 CC 
module/bdev/null/bdev_null_rpc.o 00:02:51.531 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:51.531 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.531 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.531 LIB libspdk_bdev_error.a 00:02:51.531 SO libspdk_bdev_error.so.6.0 00:02:51.788 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:51.788 LIB libspdk_bdev_null.a 00:02:51.788 SYMLINK libspdk_bdev_error.so 00:02:51.788 LIB libspdk_bdev_malloc.a 00:02:51.788 SO libspdk_bdev_null.so.6.0 00:02:51.788 SO libspdk_bdev_malloc.so.6.0 00:02:51.788 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:51.788 LIB libspdk_blobfs_bdev.a 00:02:51.788 SYMLINK libspdk_bdev_malloc.so 00:02:51.788 SYMLINK libspdk_bdev_null.so 00:02:51.788 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:51.788 SO libspdk_blobfs_bdev.so.6.0 00:02:51.788 LIB libspdk_bdev_delay.a 00:02:52.046 SO libspdk_bdev_delay.so.6.0 00:02:52.046 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.046 CC module/bdev/raid/bdev_raid.o 00:02:52.046 LIB libspdk_bdev_gpt.a 00:02:52.046 SYMLINK libspdk_blobfs_bdev.so 00:02:52.046 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.046 SO libspdk_bdev_gpt.so.6.0 00:02:52.046 CC module/bdev/raid/bdev_raid_sb.o 00:02:52.046 SYMLINK libspdk_bdev_delay.so 00:02:52.046 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.046 CC module/bdev/split/vbdev_split.o 00:02:52.046 SYMLINK libspdk_bdev_gpt.so 00:02:52.046 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.302 LIB libspdk_bdev_lvol.a 00:02:52.302 SO libspdk_bdev_lvol.so.6.0 00:02:52.302 CC module/bdev/nvme/nvme_rpc.o 00:02:52.302 SYMLINK libspdk_bdev_lvol.so 00:02:52.302 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.302 CC module/bdev/nvme/vbdev_opal.o 00:02:52.559 LIB libspdk_bdev_passthru.a 00:02:52.559 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.559 LIB libspdk_bdev_split.a 00:02:52.559 SO libspdk_bdev_passthru.so.6.0 00:02:52.559 SO libspdk_bdev_split.so.6.0 00:02:52.559 SYMLINK libspdk_bdev_passthru.so 00:02:52.559 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:52.816 SYMLINK libspdk_bdev_split.so 00:02:52.816 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.816 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.816 CC module/bdev/raid/raid0.o 00:02:52.816 CC module/bdev/raid/raid1.o 00:02:53.074 CC module/bdev/aio/bdev_aio.o 00:02:53.074 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.074 CC module/bdev/ftl/bdev_ftl.o 00:02:53.074 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.074 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.075 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.075 CC module/bdev/raid/concat.o 00:02:53.338 LIB libspdk_bdev_zone_block.a 00:02:53.596 LIB libspdk_bdev_raid.a 00:02:53.596 SO libspdk_bdev_zone_block.so.6.0 00:02:53.596 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.596 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.596 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.596 LIB libspdk_bdev_ftl.a 00:02:53.596 LIB libspdk_bdev_aio.a 00:02:53.596 SO libspdk_bdev_raid.so.6.0 00:02:53.596 SO libspdk_bdev_ftl.so.6.0 00:02:53.596 SYMLINK libspdk_bdev_zone_block.so 00:02:53.596 SO libspdk_bdev_aio.so.6.0 00:02:53.596 SYMLINK libspdk_bdev_ftl.so 00:02:53.596 SYMLINK libspdk_bdev_raid.so 00:02:53.596 SYMLINK libspdk_bdev_aio.so 00:02:53.854 LIB libspdk_bdev_iscsi.a 00:02:53.854 SO libspdk_bdev_iscsi.so.6.0 00:02:53.854 SYMLINK libspdk_bdev_iscsi.so 00:02:54.418 LIB libspdk_bdev_virtio.a 00:02:54.418 SO libspdk_bdev_virtio.so.6.0 00:02:54.418 SYMLINK libspdk_bdev_virtio.so 00:02:54.675 LIB libspdk_bdev_nvme.a 00:02:54.675 SO libspdk_bdev_nvme.so.7.0 
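Each LIB/SO/SYMLINK triple above (for example libspdk_bdev_nvme.a, libspdk_bdev_nvme.so.7.0 and the libspdk_bdev_nvme.so link) means the component was produced both as a static archive and as a versioned shared object with an unversioned symlink beside it. A small, hedged way to inspect one of the results is sketched below; the build/lib output directory is an assumption about this tree's layout rather than a path shown in the log.

  # Sketch only: list the versioned .so plus its symlink and show SONAME/NEEDED
  # entries (assumes the shared objects land under build/lib/ in this checkout).
  cd /home/vagrant/spdk_repo/spdk
  ls -l build/lib/libspdk_bdev_nvme.so*
  readelf -d build/lib/libspdk_bdev_nvme.so | grep -E 'SONAME|NEEDED'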
00:02:54.675 SYMLINK libspdk_bdev_nvme.so 00:02:55.239 CC module/event/subsystems/keyring/keyring.o 00:02:55.239 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.239 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.239 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.239 CC module/event/subsystems/vmd/vmd.o 00:02:55.239 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.239 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.239 CC module/event/subsystems/sock/sock.o 00:02:55.497 LIB libspdk_event_vhost_blk.a 00:02:55.497 LIB libspdk_event_scheduler.a 00:02:55.497 SO libspdk_event_vhost_blk.so.3.0 00:02:55.497 LIB libspdk_event_keyring.a 00:02:55.497 LIB libspdk_event_sock.a 00:02:55.497 LIB libspdk_event_vmd.a 00:02:55.497 SO libspdk_event_scheduler.so.4.0 00:02:55.497 SO libspdk_event_keyring.so.1.0 00:02:55.497 LIB libspdk_event_iobuf.a 00:02:55.497 SYMLINK libspdk_event_vhost_blk.so 00:02:55.497 SO libspdk_event_sock.so.5.0 00:02:55.497 SO libspdk_event_vmd.so.6.0 00:02:55.497 SYMLINK libspdk_event_scheduler.so 00:02:55.497 SO libspdk_event_iobuf.so.3.0 00:02:55.497 SYMLINK libspdk_event_sock.so 00:02:55.497 SYMLINK libspdk_event_keyring.so 00:02:55.497 SYMLINK libspdk_event_vmd.so 00:02:55.755 SYMLINK libspdk_event_iobuf.so 00:02:56.014 CC module/event/subsystems/accel/accel.o 00:02:56.014 LIB libspdk_event_accel.a 00:02:56.014 SO libspdk_event_accel.so.6.0 00:02:56.271 SYMLINK libspdk_event_accel.so 00:02:56.529 CC module/event/subsystems/bdev/bdev.o 00:02:56.529 LIB libspdk_event_bdev.a 00:02:56.786 SO libspdk_event_bdev.so.6.0 00:02:56.786 SYMLINK libspdk_event_bdev.so 00:02:57.055 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.055 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.055 CC module/event/subsystems/nbd/nbd.o 00:02:57.055 CC module/event/subsystems/ublk/ublk.o 00:02:57.055 CC module/event/subsystems/scsi/scsi.o 00:02:57.055 LIB libspdk_event_ublk.a 00:02:57.055 LIB libspdk_event_nbd.a 00:02:57.055 SO libspdk_event_ublk.so.3.0 00:02:57.313 SO libspdk_event_nbd.so.6.0 00:02:57.313 SYMLINK libspdk_event_ublk.so 00:02:57.313 LIB libspdk_event_scsi.a 00:02:57.313 SYMLINK libspdk_event_nbd.so 00:02:57.313 LIB libspdk_event_nvmf.a 00:02:57.313 SO libspdk_event_scsi.so.6.0 00:02:57.313 SO libspdk_event_nvmf.so.6.0 00:02:57.313 SYMLINK libspdk_event_scsi.so 00:02:57.313 SYMLINK libspdk_event_nvmf.so 00:02:57.571 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.571 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.829 LIB libspdk_event_vhost_scsi.a 00:02:57.829 SO libspdk_event_vhost_scsi.so.3.0 00:02:57.829 LIB libspdk_event_iscsi.a 00:02:57.829 SO libspdk_event_iscsi.so.6.0 00:02:57.829 SYMLINK libspdk_event_vhost_scsi.so 00:02:57.829 SYMLINK libspdk_event_iscsi.so 00:02:58.087 SO libspdk.so.6.0 00:02:58.087 SYMLINK libspdk.so 00:02:58.345 CXX app/trace/trace.o 00:02:58.345 CC app/spdk_lspci/spdk_lspci.o 00:02:58.345 CC app/trace_record/trace_record.o 00:02:58.345 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.345 CC app/nvmf_tgt/nvmf_main.o 00:02:58.345 CC app/spdk_tgt/spdk_tgt.o 00:02:58.345 CC test/thread/poller_perf/poller_perf.o 00:02:58.345 CC examples/util/zipf/zipf.o 00:02:58.603 CC test/app/bdev_svc/bdev_svc.o 00:02:58.603 CC test/dma/test_dma/test_dma.o 00:02:58.603 LINK spdk_lspci 00:02:58.603 LINK spdk_trace_record 00:02:58.603 LINK poller_perf 00:02:58.603 LINK nvmf_tgt 00:02:58.603 LINK iscsi_tgt 00:02:58.862 LINK zipf 00:02:58.862 LINK spdk_tgt 00:02:58.862 LINK bdev_svc 00:02:58.862 LINK spdk_trace 00:02:59.120 CC 
app/spdk_nvme_perf/perf.o 00:02:59.120 LINK test_dma 00:02:59.120 CC test/app/histogram_perf/histogram_perf.o 00:02:59.120 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:59.120 CC examples/ioat/perf/perf.o 00:02:59.120 CC examples/ioat/verify/verify.o 00:02:59.120 LINK histogram_perf 00:02:59.378 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.378 CC app/spdk_nvme_identify/identify.o 00:02:59.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.378 LINK ioat_perf 00:02:59.642 LINK nvme_fuzz 00:02:59.642 CC test/app/jsoncat/jsoncat.o 00:02:59.642 TEST_HEADER include/spdk/accel.h 00:02:59.642 TEST_HEADER include/spdk/accel_module.h 00:02:59.642 TEST_HEADER include/spdk/assert.h 00:02:59.642 TEST_HEADER include/spdk/barrier.h 00:02:59.642 TEST_HEADER include/spdk/base64.h 00:02:59.642 TEST_HEADER include/spdk/bdev.h 00:02:59.642 TEST_HEADER include/spdk/bdev_module.h 00:02:59.642 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.642 TEST_HEADER include/spdk/bit_array.h 00:02:59.642 TEST_HEADER include/spdk/bit_pool.h 00:02:59.642 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.642 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.642 TEST_HEADER include/spdk/blobfs.h 00:02:59.642 TEST_HEADER include/spdk/blob.h 00:02:59.642 TEST_HEADER include/spdk/conf.h 00:02:59.642 TEST_HEADER include/spdk/config.h 00:02:59.642 TEST_HEADER include/spdk/cpuset.h 00:02:59.642 TEST_HEADER include/spdk/crc16.h 00:02:59.642 TEST_HEADER include/spdk/crc32.h 00:02:59.642 TEST_HEADER include/spdk/crc64.h 00:02:59.643 TEST_HEADER include/spdk/dif.h 00:02:59.643 TEST_HEADER include/spdk/dma.h 00:02:59.643 TEST_HEADER include/spdk/endian.h 00:02:59.643 TEST_HEADER include/spdk/env_dpdk.h 00:02:59.643 TEST_HEADER include/spdk/env.h 00:02:59.643 TEST_HEADER include/spdk/event.h 00:02:59.643 TEST_HEADER include/spdk/fd_group.h 00:02:59.643 TEST_HEADER include/spdk/fd.h 00:02:59.643 TEST_HEADER include/spdk/file.h 00:02:59.643 LINK verify 00:02:59.643 TEST_HEADER include/spdk/ftl.h 00:02:59.643 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.643 TEST_HEADER include/spdk/hexlify.h 00:02:59.643 TEST_HEADER include/spdk/histogram_data.h 00:02:59.643 TEST_HEADER include/spdk/idxd.h 00:02:59.643 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.643 TEST_HEADER include/spdk/init.h 00:02:59.643 TEST_HEADER include/spdk/ioat.h 00:02:59.643 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.643 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.643 TEST_HEADER include/spdk/json.h 00:02:59.643 TEST_HEADER include/spdk/jsonrpc.h 00:02:59.643 TEST_HEADER include/spdk/keyring.h 00:02:59.643 TEST_HEADER include/spdk/keyring_module.h 00:02:59.643 TEST_HEADER include/spdk/likely.h 00:02:59.643 TEST_HEADER include/spdk/log.h 00:02:59.643 TEST_HEADER include/spdk/lvol.h 00:02:59.643 TEST_HEADER include/spdk/memory.h 00:02:59.643 TEST_HEADER include/spdk/mmio.h 00:02:59.643 TEST_HEADER include/spdk/nbd.h 00:02:59.643 TEST_HEADER include/spdk/net.h 00:02:59.643 TEST_HEADER include/spdk/notify.h 00:02:59.643 TEST_HEADER include/spdk/nvme.h 00:02:59.643 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.643 LINK jsoncat 00:02:59.643 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.643 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.643 TEST_HEADER include/spdk/nvme_spec.h 00:02:59.643 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.643 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.643 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.643 TEST_HEADER include/spdk/nvmf.h 00:02:59.643 TEST_HEADER 
include/spdk/nvmf_spec.h 00:02:59.643 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.643 TEST_HEADER include/spdk/opal.h 00:02:59.643 TEST_HEADER include/spdk/opal_spec.h 00:02:59.643 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.643 TEST_HEADER include/spdk/pci_ids.h 00:02:59.643 TEST_HEADER include/spdk/pipe.h 00:02:59.643 TEST_HEADER include/spdk/queue.h 00:02:59.643 TEST_HEADER include/spdk/reduce.h 00:02:59.643 TEST_HEADER include/spdk/rpc.h 00:02:59.643 TEST_HEADER include/spdk/scheduler.h 00:02:59.643 TEST_HEADER include/spdk/scsi.h 00:02:59.643 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.643 TEST_HEADER include/spdk/sock.h 00:02:59.643 TEST_HEADER include/spdk/stdinc.h 00:02:59.643 TEST_HEADER include/spdk/string.h 00:02:59.643 TEST_HEADER include/spdk/thread.h 00:02:59.643 TEST_HEADER include/spdk/trace.h 00:02:59.643 TEST_HEADER include/spdk/trace_parser.h 00:02:59.643 TEST_HEADER include/spdk/tree.h 00:02:59.643 TEST_HEADER include/spdk/ublk.h 00:02:59.643 TEST_HEADER include/spdk/util.h 00:02:59.643 TEST_HEADER include/spdk/uuid.h 00:02:59.643 TEST_HEADER include/spdk/version.h 00:02:59.911 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.911 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.911 TEST_HEADER include/spdk/vhost.h 00:02:59.911 TEST_HEADER include/spdk/vmd.h 00:02:59.911 TEST_HEADER include/spdk/xor.h 00:02:59.911 TEST_HEADER include/spdk/zipf.h 00:02:59.911 CXX test/cpp_headers/accel.o 00:02:59.911 CC examples/idxd/perf/perf.o 00:02:59.911 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.911 LINK lsvmd 00:02:59.911 CC app/spdk_top/spdk_top.o 00:02:59.911 CXX test/cpp_headers/accel_module.o 00:03:00.169 LINK vhost_fuzz 00:03:00.169 LINK spdk_nvme_discover 00:03:00.169 CXX test/cpp_headers/assert.o 00:03:00.169 LINK idxd_perf 00:03:00.169 CC examples/vmd/led/led.o 00:03:00.427 LINK spdk_nvme_perf 00:03:00.427 CXX test/cpp_headers/barrier.o 00:03:00.427 LINK led 00:03:00.427 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:00.684 CXX test/cpp_headers/base64.o 00:03:00.684 CC test/env/mem_callbacks/mem_callbacks.o 00:03:00.684 LINK spdk_nvme_identify 00:03:00.684 CC test/env/vtophys/vtophys.o 00:03:00.684 CXX test/cpp_headers/bdev.o 00:03:00.684 CC examples/thread/thread/thread_ex.o 00:03:00.942 LINK interrupt_tgt 00:03:00.942 LINK vtophys 00:03:00.943 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:01.200 CXX test/cpp_headers/bdev_module.o 00:03:01.200 LINK thread 00:03:01.200 CC examples/sock/hello_world/hello_sock.o 00:03:01.200 LINK env_dpdk_post_init 00:03:01.200 CC app/vhost/vhost.o 00:03:01.458 CC app/spdk_dd/spdk_dd.o 00:03:01.458 CXX test/cpp_headers/bdev_zone.o 00:03:01.458 LINK spdk_top 00:03:01.458 LINK iscsi_fuzz 00:03:01.715 LINK vhost 00:03:01.715 CXX test/cpp_headers/bit_array.o 00:03:01.715 LINK hello_sock 00:03:01.715 CXX test/cpp_headers/bit_pool.o 00:03:01.715 CC test/app/stub/stub.o 00:03:01.715 LINK mem_callbacks 00:03:01.973 CC app/fio/nvme/fio_plugin.o 00:03:01.973 CXX test/cpp_headers/blob_bdev.o 00:03:01.973 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.973 CXX test/cpp_headers/blobfs.o 00:03:01.973 LINK spdk_dd 00:03:01.973 LINK stub 00:03:01.973 CC test/env/memory/memory_ut.o 00:03:01.973 CC app/fio/bdev/fio_plugin.o 00:03:02.230 CXX test/cpp_headers/blob.o 00:03:02.230 CC examples/accel/perf/accel_perf.o 00:03:02.230 CC test/env/pci/pci_ut.o 00:03:02.230 CXX test/cpp_headers/conf.o 00:03:02.230 CC examples/nvme/hello_world/hello_world.o 00:03:02.488 CC examples/blob/hello_world/hello_blob.o 00:03:02.488 CC 
test/event/event_perf/event_perf.o 00:03:02.488 CXX test/cpp_headers/config.o 00:03:02.488 LINK spdk_nvme 00:03:02.488 LINK hello_world 00:03:02.488 CXX test/cpp_headers/cpuset.o 00:03:02.746 LINK event_perf 00:03:02.746 LINK spdk_bdev 00:03:02.746 LINK hello_blob 00:03:02.746 LINK pci_ut 00:03:02.746 LINK accel_perf 00:03:02.746 CXX test/cpp_headers/crc16.o 00:03:02.746 CC test/event/reactor/reactor.o 00:03:03.005 CC examples/nvme/reconnect/reconnect.o 00:03:03.005 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:03.005 CXX test/cpp_headers/crc32.o 00:03:03.005 LINK reactor 00:03:03.005 CC test/rpc_client/rpc_client_test.o 00:03:03.263 CC test/nvme/aer/aer.o 00:03:03.263 CC examples/blob/cli/blobcli.o 00:03:03.263 CXX test/cpp_headers/crc64.o 00:03:03.263 LINK rpc_client_test 00:03:03.263 LINK memory_ut 00:03:03.263 LINK reconnect 00:03:03.521 CC examples/bdev/hello_world/hello_bdev.o 00:03:03.521 CC test/event/reactor_perf/reactor_perf.o 00:03:03.521 CXX test/cpp_headers/dif.o 00:03:03.521 LINK aer 00:03:03.521 CXX test/cpp_headers/dma.o 00:03:03.521 LINK nvme_manage 00:03:03.779 LINK reactor_perf 00:03:03.779 CXX test/cpp_headers/endian.o 00:03:03.779 LINK hello_bdev 00:03:03.779 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.779 LINK blobcli 00:03:03.779 CC test/nvme/reset/reset.o 00:03:03.779 CC test/nvme/sgl/sgl.o 00:03:03.779 CXX test/cpp_headers/env_dpdk.o 00:03:04.036 CC examples/nvme/arbitration/arbitration.o 00:03:04.036 CC test/event/app_repeat/app_repeat.o 00:03:04.036 CC test/accel/dif/dif.o 00:03:04.036 CXX test/cpp_headers/env.o 00:03:04.294 LINK reset 00:03:04.294 CC test/event/scheduler/scheduler.o 00:03:04.294 LINK app_repeat 00:03:04.294 LINK sgl 00:03:04.294 CXX test/cpp_headers/event.o 00:03:04.294 CC examples/nvme/hotplug/hotplug.o 00:03:04.553 LINK arbitration 00:03:04.553 CXX test/cpp_headers/fd_group.o 00:03:04.553 CXX test/cpp_headers/fd.o 00:03:04.553 LINK scheduler 00:03:04.553 LINK dif 00:03:04.811 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:04.811 CC test/nvme/e2edp/nvme_dp.o 00:03:04.811 CXX test/cpp_headers/file.o 00:03:04.812 LINK hotplug 00:03:05.070 CXX test/cpp_headers/ftl.o 00:03:05.070 CC examples/nvme/abort/abort.o 00:03:05.070 CXX test/cpp_headers/gpt_spec.o 00:03:05.070 LINK cmb_copy 00:03:05.070 CXX test/cpp_headers/hexlify.o 00:03:05.328 LINK bdevperf 00:03:05.328 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:05.328 CC test/nvme/overhead/overhead.o 00:03:05.328 LINK nvme_dp 00:03:05.586 CC test/nvme/err_injection/err_injection.o 00:03:05.586 CXX test/cpp_headers/histogram_data.o 00:03:05.586 CXX test/cpp_headers/idxd.o 00:03:05.586 LINK abort 00:03:05.586 CXX test/cpp_headers/idxd_spec.o 00:03:05.586 CXX test/cpp_headers/init.o 00:03:05.586 LINK pmr_persistence 00:03:05.844 CXX test/cpp_headers/ioat.o 00:03:05.844 LINK err_injection 00:03:05.844 LINK overhead 00:03:05.844 CXX test/cpp_headers/ioat_spec.o 00:03:06.102 CC test/nvme/startup/startup.o 00:03:06.102 CXX test/cpp_headers/iscsi_spec.o 00:03:06.102 CC test/nvme/reserve/reserve.o 00:03:06.102 CC test/nvme/connect_stress/connect_stress.o 00:03:06.103 CC test/nvme/simple_copy/simple_copy.o 00:03:06.103 CC test/nvme/boot_partition/boot_partition.o 00:03:06.103 CC test/nvme/compliance/nvme_compliance.o 00:03:06.360 CC test/blobfs/mkfs/mkfs.o 00:03:06.360 CXX test/cpp_headers/json.o 00:03:06.360 LINK reserve 00:03:06.360 LINK startup 00:03:06.360 LINK connect_stress 00:03:06.360 LINK boot_partition 00:03:06.618 LINK simple_copy 00:03:06.618 CXX test/cpp_headers/jsonrpc.o 00:03:06.618 
LINK nvme_compliance 00:03:06.618 LINK mkfs 00:03:06.618 CC test/lvol/esnap/esnap.o 00:03:06.618 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.618 CXX test/cpp_headers/keyring.o 00:03:06.876 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.876 CXX test/cpp_headers/keyring_module.o 00:03:06.876 CC test/bdev/bdevio/bdevio.o 00:03:06.876 CXX test/cpp_headers/likely.o 00:03:06.876 CC examples/nvmf/nvmf/nvmf.o 00:03:06.876 CXX test/cpp_headers/log.o 00:03:06.876 LINK fused_ordering 00:03:06.876 CC test/nvme/fdp/fdp.o 00:03:07.134 LINK doorbell_aers 00:03:07.134 CXX test/cpp_headers/lvol.o 00:03:07.134 CXX test/cpp_headers/memory.o 00:03:07.134 CXX test/cpp_headers/mmio.o 00:03:07.134 CC test/nvme/cuse/cuse.o 00:03:07.134 CXX test/cpp_headers/nbd.o 00:03:07.134 CXX test/cpp_headers/net.o 00:03:07.134 CXX test/cpp_headers/notify.o 00:03:07.392 CXX test/cpp_headers/nvme.o 00:03:07.392 LINK nvmf 00:03:07.392 LINK fdp 00:03:07.392 LINK bdevio 00:03:07.392 CXX test/cpp_headers/nvme_intel.o 00:03:07.392 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.392 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.392 CXX test/cpp_headers/nvme_spec.o 00:03:07.650 CXX test/cpp_headers/nvme_zns.o 00:03:07.650 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.650 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.650 CXX test/cpp_headers/nvmf.o 00:03:07.650 CXX test/cpp_headers/nvmf_spec.o 00:03:07.650 CXX test/cpp_headers/nvmf_transport.o 00:03:07.907 CXX test/cpp_headers/opal.o 00:03:07.907 CXX test/cpp_headers/opal_spec.o 00:03:07.907 CXX test/cpp_headers/pci_ids.o 00:03:07.907 CXX test/cpp_headers/pipe.o 00:03:07.907 CXX test/cpp_headers/queue.o 00:03:07.907 CXX test/cpp_headers/reduce.o 00:03:07.907 CXX test/cpp_headers/rpc.o 00:03:07.907 CXX test/cpp_headers/scheduler.o 00:03:08.164 CXX test/cpp_headers/scsi.o 00:03:08.164 CXX test/cpp_headers/scsi_spec.o 00:03:08.164 CXX test/cpp_headers/sock.o 00:03:08.164 CXX test/cpp_headers/stdinc.o 00:03:08.164 CXX test/cpp_headers/string.o 00:03:08.164 CXX test/cpp_headers/thread.o 00:03:08.164 CXX test/cpp_headers/trace.o 00:03:08.164 CXX test/cpp_headers/trace_parser.o 00:03:08.164 CXX test/cpp_headers/tree.o 00:03:08.164 CXX test/cpp_headers/ublk.o 00:03:08.420 CXX test/cpp_headers/util.o 00:03:08.420 CXX test/cpp_headers/uuid.o 00:03:08.420 CXX test/cpp_headers/version.o 00:03:08.420 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.420 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.420 CXX test/cpp_headers/vhost.o 00:03:08.420 CXX test/cpp_headers/vmd.o 00:03:08.420 CXX test/cpp_headers/xor.o 00:03:08.420 CXX test/cpp_headers/zipf.o 00:03:08.676 LINK cuse 00:03:12.890 LINK esnap 00:03:12.890 ************************************ 00:03:12.890 END TEST make 00:03:12.890 ************************************ 00:03:12.890 00:03:12.890 real 1m19.750s 00:03:12.890 user 8m21.255s 00:03:12.890 sys 1m48.672s 00:03:12.890 19:19:02 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:12.890 19:19:02 make -- common/autotest_common.sh@10 -- $ set +x 00:03:12.890 19:19:02 -- common/autotest_common.sh@1142 -- $ return 0 00:03:12.890 19:19:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:12.890 19:19:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:12.890 19:19:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:12.890 19:19:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.890 19:19:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:12.890 19:19:02 -- pm/common@44 -- $ 
pid=5201 00:03:12.890 19:19:02 -- pm/common@50 -- $ kill -TERM 5201 00:03:12.890 19:19:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.890 19:19:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:12.890 19:19:02 -- pm/common@44 -- $ pid=5203 00:03:12.890 19:19:02 -- pm/common@50 -- $ kill -TERM 5203 00:03:12.890 19:19:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:12.890 19:19:02 -- nvmf/common.sh@7 -- # uname -s 00:03:12.890 19:19:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:12.890 19:19:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:12.890 19:19:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:12.890 19:19:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:12.890 19:19:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:12.890 19:19:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:12.890 19:19:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:12.890 19:19:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:12.890 19:19:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:12.890 19:19:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:12.891 19:19:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:03:12.891 19:19:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:03:12.891 19:19:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:12.891 19:19:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:12.891 19:19:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:12.891 19:19:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:12.891 19:19:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:12.891 19:19:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:12.891 19:19:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:12.891 19:19:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:12.891 19:19:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.891 19:19:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.891 19:19:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.891 19:19:02 -- paths/export.sh@5 -- # export PATH 00:03:12.891 19:19:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.891 19:19:02 -- nvmf/common.sh@47 -- 
# : 0 00:03:12.891 19:19:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:12.891 19:19:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:12.891 19:19:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:12.891 19:19:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:12.891 19:19:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:12.891 19:19:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:12.891 19:19:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:12.891 19:19:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:12.891 19:19:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:12.891 19:19:02 -- spdk/autotest.sh@32 -- # uname -s 00:03:12.891 19:19:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:12.891 19:19:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:12.891 19:19:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:12.891 19:19:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:12.891 19:19:02 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:12.891 19:19:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:12.891 19:19:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:12.891 19:19:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:12.891 19:19:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54704 00:03:12.891 19:19:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:12.891 19:19:02 -- pm/common@17 -- # local monitor 00:03:12.891 19:19:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.891 19:19:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.891 19:19:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:12.891 19:19:02 -- pm/common@25 -- # sleep 1 00:03:12.891 19:19:02 -- pm/common@21 -- # date +%s 00:03:12.891 19:19:02 -- pm/common@21 -- # date +%s 00:03:12.891 19:19:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721071142 00:03:12.891 19:19:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721071142 00:03:12.891 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721071142_collect-cpu-load.pm.log 00:03:12.891 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721071142_collect-vmstat.pm.log 00:03:13.824 19:19:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:13.824 19:19:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:13.824 19:19:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:13.824 19:19:03 -- common/autotest_common.sh@10 -- # set +x 00:03:13.824 19:19:03 -- spdk/autotest.sh@59 -- # create_test_list 00:03:13.825 19:19:03 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:13.825 19:19:03 -- common/autotest_common.sh@10 -- # set +x 00:03:13.825 19:19:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:13.825 19:19:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:13.825 19:19:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:13.825 19:19:03 -- spdk/autotest.sh@62 -- # 
out=/home/vagrant/spdk_repo/spdk/../output 00:03:13.825 19:19:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:13.825 19:19:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:13.825 19:19:03 -- common/autotest_common.sh@1455 -- # uname 00:03:13.825 19:19:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:13.825 19:19:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:13.825 19:19:03 -- common/autotest_common.sh@1475 -- # uname 00:03:13.825 19:19:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:13.825 19:19:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:13.825 19:19:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:13.825 19:19:03 -- spdk/autotest.sh@72 -- # hash lcov 00:03:13.825 19:19:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:13.825 19:19:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:13.825 --rc lcov_branch_coverage=1 00:03:13.825 --rc lcov_function_coverage=1 00:03:13.825 --rc genhtml_branch_coverage=1 00:03:13.825 --rc genhtml_function_coverage=1 00:03:13.825 --rc genhtml_legend=1 00:03:13.825 --rc geninfo_all_blocks=1 00:03:13.825 ' 00:03:13.825 19:19:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:13.825 --rc lcov_branch_coverage=1 00:03:13.825 --rc lcov_function_coverage=1 00:03:13.825 --rc genhtml_branch_coverage=1 00:03:13.825 --rc genhtml_function_coverage=1 00:03:13.825 --rc genhtml_legend=1 00:03:13.825 --rc geninfo_all_blocks=1 00:03:13.825 ' 00:03:13.825 19:19:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:13.825 --rc lcov_branch_coverage=1 00:03:13.825 --rc lcov_function_coverage=1 00:03:13.825 --rc genhtml_branch_coverage=1 00:03:13.825 --rc genhtml_function_coverage=1 00:03:13.825 --rc genhtml_legend=1 00:03:13.825 --rc geninfo_all_blocks=1 00:03:13.825 --no-external' 00:03:13.825 19:19:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:13.825 --rc lcov_branch_coverage=1 00:03:13.825 --rc lcov_function_coverage=1 00:03:13.825 --rc genhtml_branch_coverage=1 00:03:13.825 --rc genhtml_function_coverage=1 00:03:13.825 --rc genhtml_legend=1 00:03:13.825 --rc geninfo_all_blocks=1 00:03:13.825 --no-external' 00:03:13.825 19:19:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:13.825 lcov: LCOV version 1.14 00:03:13.825 19:19:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:31.962 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:31.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:44.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:44.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:03:44.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:44.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:44.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:44.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:44.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 
00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:44.195 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:44.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:44.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions 
found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:44.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:44.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:47.481 19:19:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:47.481 19:19:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.481 19:19:37 -- common/autotest_common.sh@10 -- # set +x 00:03:47.481 19:19:37 -- spdk/autotest.sh@91 -- # rm -f 00:03:47.481 19:19:37 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.416 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:48.416 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:48.416 19:19:37 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:48.416 19:19:37 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:48.416 19:19:37 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:48.416 19:19:37 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:48.416 19:19:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:48.416 19:19:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:48.416 19:19:37 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:48.416 19:19:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.416 19:19:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:48.416 19:19:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:48.416 19:19:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:48.416 19:19:37 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:48.416 19:19:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:48.416 19:19:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:48.416 19:19:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:48.416 19:19:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:48.416 19:19:37 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:48.416 19:19:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:48.416 19:19:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:48.416 19:19:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:48.416 19:19:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:48.416 19:19:37 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:48.416 19:19:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:48.416 19:19:37 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:48.416 19:19:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:48.416 19:19:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.416 19:19:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:48.416 19:19:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:48.416 19:19:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:48.416 19:19:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:48.416 No valid GPT data, bailing 00:03:48.416 19:19:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.416 19:19:38 -- scripts/common.sh@391 -- # pt= 00:03:48.416 19:19:38 -- scripts/common.sh@392 -- # return 1 00:03:48.416 19:19:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:48.416 1+0 records in 00:03:48.416 1+0 records out 00:03:48.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435806 s, 241 MB/s 00:03:48.416 19:19:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.416 19:19:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:48.416 19:19:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:48.416 19:19:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:48.416 19:19:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:48.416 No valid GPT data, bailing 00:03:48.416 19:19:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:48.416 19:19:38 -- scripts/common.sh@391 -- # pt= 00:03:48.416 19:19:38 -- scripts/common.sh@392 -- # return 1 00:03:48.416 19:19:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:48.416 1+0 records in 00:03:48.416 1+0 records out 00:03:48.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00327455 s, 320 MB/s 00:03:48.416 19:19:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.416 19:19:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:48.416 19:19:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:48.416 19:19:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:48.416 19:19:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:48.416 No valid GPT data, bailing 00:03:48.416 19:19:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:48.416 19:19:38 -- scripts/common.sh@391 -- # pt= 00:03:48.416 19:19:38 -- scripts/common.sh@392 -- # return 1 00:03:48.416 19:19:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:48.674 1+0 records in 00:03:48.674 1+0 records out 00:03:48.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436384 s, 240 MB/s 00:03:48.674 19:19:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.674 19:19:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:48.674 19:19:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:48.674 19:19:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:48.674 19:19:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:48.674 No valid GPT data, bailing 00:03:48.674 19:19:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:48.674 19:19:38 -- scripts/common.sh@391 -- # pt= 00:03:48.674 19:19:38 -- scripts/common.sh@392 -- # return 1 00:03:48.674 19:19:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
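# ---- illustrative sketch, not part of the captured trace ----
# The trace above shows autotest.sh's pre_cleanup pass: each whole NVMe
# namespace is probed with scripts/spdk-gpt.py and blkid, and when no
# partition table is found ("No valid GPT data, bailing") the first MiB is
# zeroed so stale metadata cannot influence later tests. A minimal
# standalone approximation of that loop (device names assumed to match the
# /dev/nvme*n* layout seen in this run):
for dev in /dev/nvme*n*; do
  [[ "$dev" == *p* ]] && continue                 # skip partitions, keep whole namespaces
  if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1       # wipe the first MiB, as in the trace
  fi
done
# ---- end sketch ----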
00:03:48.674 1+0 records in 00:03:48.674 1+0 records out 00:03:48.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425952 s, 246 MB/s 00:03:48.674 19:19:38 -- spdk/autotest.sh@118 -- # sync 00:03:48.674 19:19:38 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:48.674 19:19:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:48.674 19:19:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.576 19:19:40 -- spdk/autotest.sh@124 -- # uname -s 00:03:50.576 19:19:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:50.576 19:19:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:50.576 19:19:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.576 19:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.576 19:19:40 -- common/autotest_common.sh@10 -- # set +x 00:03:50.576 ************************************ 00:03:50.576 START TEST setup.sh 00:03:50.576 ************************************ 00:03:50.576 19:19:40 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:50.576 * Looking for test storage... 00:03:50.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.576 19:19:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:50.576 19:19:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:50.576 19:19:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:50.576 19:19:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.576 19:19:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.576 19:19:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.576 ************************************ 00:03:50.576 START TEST acl 00:03:50.576 ************************************ 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:50.576 * Looking for test storage... 
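# ---- illustrative sketch, not part of the captured trace ----
# The setup.sh/acl.sh run that follows exercises scripts/setup.sh device
# filtering: the "denied" test expects a controller listed in PCI_BLOCKED to
# be skipped, and the "allowed" test expects a controller listed in
# PCI_ALLOWED to be rebound to a userspace driver (nvme -> uio_pci_generic
# in this run). Condensed to its essentials; the BDF 0000:00:10.0 and the
# setup.sh path are taken from this trace, and it is assumed setup.sh reads
# PCI_BLOCKED/PCI_ALLOWED from the environment as it does here:
SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh
PCI_BLOCKED=' 0000:00:10.0' "$SETUP" config | grep 'Skipping denied controller at 0000:00:10.0'
"$SETUP" reset
PCI_ALLOWED='0000:00:10.0' "$SETUP" config | grep -E '0000:00:10.0 .*: nvme -> .*'
"$SETUP" reset
# ---- end sketch ----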
00:03:50.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.576 19:19:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:50.576 19:19:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.576 19:19:40 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:50.576 19:19:40 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:50.576 19:19:40 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:50.576 19:19:40 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:50.576 19:19:40 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:50.576 19:19:40 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.576 19:19:40 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.509 19:19:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:51.509 19:19:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:51.509 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.509 19:19:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:51.509 19:19:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.509 19:19:41 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:52.074 19:19:41 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.074 Hugepages 00:03:52.074 node hugesize free / total 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.074 00:03:52.074 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:52.074 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:52.332 19:19:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:52.332 19:19:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.332 19:19:41 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.332 19:19:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.332 ************************************ 00:03:52.332 START TEST denied 00:03:52.332 ************************************ 00:03:52.332 19:19:41 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:52.332 19:19:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:52.332 19:19:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:52.332 19:19:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:52.332 19:19:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.332 19:19:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.266 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:53.266 19:19:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:53.267 19:19:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.267 19:19:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.830 00:03:53.830 real 0m1.402s 00:03:53.830 user 0m0.581s 00:03:53.830 sys 0m0.770s 00:03:53.830 19:19:43 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.830 19:19:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:53.830 ************************************ 00:03:53.830 END TEST denied 00:03:53.830 ************************************ 00:03:53.830 19:19:43 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:53.830 19:19:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:53.830 19:19:43 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.830 19:19:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.830 19:19:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.830 ************************************ 00:03:53.830 START TEST allowed 00:03:53.830 ************************************ 00:03:53.830 19:19:43 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:53.830 19:19:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:53.830 19:19:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:53.830 19:19:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.830 19:19:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:53.830 19:19:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.394 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.394 19:19:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.330 00:03:55.330 real 0m1.437s 00:03:55.330 user 0m0.644s 00:03:55.330 sys 0m0.799s 00:03:55.330 19:19:44 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:55.330 ************************************ 00:03:55.330 END TEST allowed 00:03:55.330 19:19:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:55.330 ************************************ 00:03:55.330 19:19:44 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:55.330 ************************************ 00:03:55.330 END TEST acl 00:03:55.330 ************************************ 00:03:55.330 00:03:55.330 real 0m4.612s 00:03:55.330 user 0m2.086s 00:03:55.330 sys 0m2.497s 00:03:55.330 19:19:44 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.331 19:19:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.331 19:19:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.331 19:19:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:55.331 19:19:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.331 19:19:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.331 19:19:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.331 ************************************ 00:03:55.331 START TEST hugepages 00:03:55.331 ************************************ 00:03:55.331 19:19:44 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:55.331 * Looking for test storage... 00:03:55.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5868296 kB' 'MemAvailable: 7379852 kB' 'Buffers: 2436 kB' 'Cached: 1722980 kB' 'SwapCached: 0 kB' 'Active: 476764 kB' 'Inactive: 1352728 kB' 'Active(anon): 114564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105764 kB' 'Mapped: 48604 kB' 'Shmem: 10488 kB' 'KReclaimable: 67132 kB' 'Slab: 140748 kB' 'SReclaimable: 67132 kB' 'SUnreclaim: 73616 kB' 'KernelStack: 6492 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.331 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.332 19:19:45 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.332 19:19:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:55.332 19:19:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.332 19:19:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.332 19:19:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.332 ************************************ 00:03:55.332 START TEST default_setup 00:03:55.332 ************************************ 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.332 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.333 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.160 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.160 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8001048 kB' 'MemAvailable: 9512424 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493384 kB' 'Inactive: 1352736 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122152 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140372 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73616 kB' 'KernelStack: 6448 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.160 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
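The xtrace up to this point is setup/common.sh's get_meminfo routine scanning /proc/meminfo with IFS=': ' and read -r var val _ until the requested key matches (Hugepagesize resolved to 2048 kB earlier; AnonHugePages resolves to 0 a little further on). A minimal, self-contained sketch of that parsing pattern follows; the function name and the optional per-node argument are illustrative assumptions, not SPDK's exact helper.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups would read the node's own meminfo instead (assumed from
    # the /sys/devices/system/node/node/meminfo test in the trace above); the
    # real helper also strips the leading "Node N " prefix, which is omitted here.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Same split as the trace: "Key:   value unit" -> var, val, rest dropped into _.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# e.g. in the run above, Hugepagesize resolves to 2048 and AnonHugePages to 0:
get_meminfo_sketch Hugepagesize
get_meminfo_sketch AnonHugePages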
00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8000544 kB' 'MemAvailable: 9511920 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493004 kB' 'Inactive: 1352736 kB' 'Active(anon): 130804 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121944 kB' 'Mapped: 48652 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140364 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73608 kB' 'KernelStack: 6416 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
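The default_setup test above sized its request as 2097152 kB against the 2048 kB default page size, which is where nr_hugepages=1024 and nodes_test[0]=1024 come from before verify_nr_hugepages re-reads AnonHugePages, HugePages_Surp and HugePages_Rsvd. A small sketch of that arithmetic follows; the helper name is hypothetical and the multi-node behaviour is assumed from the single-node trace.

default_hugepages=2048   # kB, from the Hugepagesize line matched above

get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift              # 2097152 in the run above
    local -a user_nodes=("$@")           # (0) in the run above
    local nr_hugepages=$(( size_kb / default_hugepages ))
    local -A nodes_test=()
    local node
    for node in "${user_nodes[@]}"; do
        # The trace only shows the single-node case; each listed node is
        # assumed to receive the full count here.
        nodes_test[$node]=$nr_hugepages
    done
    declare -p nodes_test
}

get_test_nr_hugepages_sketch 2097152 0
# prints: declare -A nodes_test=([0]="1024")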
00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.163 19:19:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8000544 kB' 'MemAvailable: 9511920 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493004 kB' 'Inactive: 1352736 kB' 'Active(anon): 130804 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121944 kB' 'Mapped: 
48652 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140364 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73608 kB' 'KernelStack: 6416 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
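The long runs of "-- # continue" entries through this stretch of the trace all come from one helper, get_meminfo (setup/common.sh in the trace), scanning every field of /proc/meminfo, or of a per-node copy under /sys/devices/system/node, until it reaches the field it was asked for, here HugePages_Surp and then HugePages_Rsvd. The helper is only visible through its xtrace; the following is a simplified sketch reconstructed from that trace (the function name, mem_f, mapfile, and the IFS=': ' read loop appear in the log, everything else is an assumption), not the actual setup/common.sh source.

  # Simplified reconstruction of the get_meminfo helper traced above.
  # Assumption: based only on what the xtrace shows; not SPDK's real code.
  get_meminfo() {                       # usage: get_meminfo <field> [<numa-node>]
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo mem line var val _
      # With a node id, prefer that node's counters when the sysfs file exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes each line with "Node <id> "; strip that prefix.
      mem=("${mem[@]/#Node $node /}")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
          echo "${val:-0}"
          return 0
      done
      return 1
  }

That pattern is consistent with the surp=0 and resv=0 assignments in hugepages.sh that bracket these scans, which the xtrace prints only after command substitution, i.e. as the results of get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd against the meminfo snapshot shown above.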
00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.165 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 
19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.166 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:56.428 nr_hugepages=1024 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.428 resv_hugepages=0 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.428 surplus_hugepages=0 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.428 anon_hugepages=0 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8000544 kB' 'MemAvailable: 9511920 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 492776 kB' 'Inactive: 1352736 kB' 'Active(anon): 130576 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121972 kB' 'Mapped: 48652 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140360 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73604 kB' 'KernelStack: 6416 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.428 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 
19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.429 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8000544 kB' 'MemUsed: 4241428 kB' 'SwapCached: 0 kB' 'Active: 492776 kB' 'Inactive: 1352736 kB' 'Active(anon): 130576 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1725404 kB' 'Mapped: 48652 kB' 'AnonPages: 121712 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140360 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.430 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
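The get_meminfo read traced above is just a small /proc/meminfo parser: pick the per-node meminfo file when a node id is supplied, strip the leading "Node N " prefix, then split each line on ': ' and print the value of the requested key (here HugePages_Surp, which is 0 on this box). A minimal bash sketch of that loop, with names loosely following the trace rather than the exact setup/common.sh source:

# Illustrative reconstruction of the traced parsing loop; not the real script.
get_meminfo_sketch() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node id
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # per-node statistics live under /sys when a node id is given
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node * }")       # drop the "Node N " prefix found in per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                  # kB for sizes, a plain count for HugePages_* keys
        return 0
    done
    return 1
}

Run as 'get_meminfo_sketch HugePages_Surp', it prints 0 on this machine, which is the 'echo 0' / 'return 0' pair that closes the scan above.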
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.431 node0=1024 expecting 1024 19:19:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:56.431
00:03:56.431 real 0m0.945s
00:03:56.431 user 0m0.462s
00:03:56.431 sys 0m0.436s
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:56.431 19:19:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:56.431 ************************************
00:03:56.431 END TEST default_setup
00:03:56.431 ************************************
00:03:56.431 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:56.431 19:19:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:56.431 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:56.431 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:56.431 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:56.431 ************************************
00:03:56.431 START TEST per_node_1G_alloc
00:03:56.431 ************************************
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.431 19:19:46
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.431 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.693 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.693 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.693 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048036 kB' 'MemAvailable: 10559416 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493616 kB' 'Inactive: 1352740 kB' 'Active(anon): 131416 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122536 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140368 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73612 kB' 'KernelStack: 6436 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.694 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048036 kB' 'MemAvailable: 10559416 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493264 kB' 'Inactive: 1352740 kB' 'Active(anon): 131064 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122184 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140368 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73612 kB' 'KernelStack: 6404 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.695 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.696 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048036 kB' 'MemAvailable: 10559416 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493312 kB' 'Inactive: 1352740 kB' 'Active(anon): 131112 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122004 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140372 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73616 kB' 'KernelStack: 6432 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.697 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.698 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.959 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
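The long run of "continue" entries here is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" line at a time and skipping every key until it reaches the one requested (HugePages_Rsvd in this pass, which reads back 0). A minimal standalone sketch of that scan pattern follows; the function name is illustrative and not part of the SPDK scripts, and the real helper additionally handles the per-node /sys/devices/system/node/node<N>/meminfo files, which this sketch omits:

    # Print the value of a single key from a meminfo-style file, defaulting to 0.
    get_meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
            # non-matching keys fall through, which is what produces the
            # long run of "continue" trace lines around this point
        done <"$file"
        echo 0
    }

    # e.g. get_meminfo_value HugePages_Rsvd   -> prints 0 on this run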
00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 
19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.960 nr_hugepages=512 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:56.960 resv_hugepages=0 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.960 surplus_hugepages=0 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.960 anon_hugepages=0 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.960 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048036 kB' 'MemAvailable: 10559416 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493136 kB' 'Inactive: 1352740 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122076 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140372 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73616 kB' 'KernelStack: 6432 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
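Just before this second scan, hugepages.sh printed nr_hugepages=512, resv_hugepages=0 and surplus_hugepages=0 and evaluated (( 512 == nr_hugepages + surp + resv )); the HugePages_Total value being read here (512) feeds the same check again at hugepages.sh@110. Written out as a tiny standalone check, using only the values reported on this run:

    # Values reported by the trace on this run.
    nr_hugepages=512   # requested huge pages
    resv=0             # HugePages_Rsvd
    surp=0             # HugePages_Surp
    total=512          # HugePages_Total from /proc/meminfo

    # 512 == 512 + 0 + 0, so every configured page is accounted for.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2
        exit 1
    fi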
00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.961 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 
19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048036 kB' 'MemUsed: 3193936 kB' 'SwapCached: 0 kB' 'Active: 493116 kB' 'Inactive: 1352740 kB' 'Active(anon): 130916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1725404 kB' 'Mapped: 48656 kB' 'AnonPages: 122040 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140372 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73616 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.962 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.963 19:19:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.963 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue  (the IFS=': ' / read -r var val _ / [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue skip pattern repeats for every remaining /proc/meminfo key, AnonPages through HugePages_Free; none of them matches)
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.964 node0=512 expecting 512
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:56.964
00:03:56.964 real 0m0.500s
00:03:56.964 user 0m0.250s
00:03:56.964 sys 0m0.284s
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:56.964 19:19:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:56.964 ************************************
00:03:56.964 END TEST per_node_1G_alloc
00:03:56.964 ************************************
00:03:56.964 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:56.964 19:19:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:56.964 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:56.964 19:19:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
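For readers skimming the log: the START/END banners and the real/user/sys timing come from the run_test wrapper in autotest_common.sh, which times each test function and brackets its output. A minimal sketch of that visible behaviour, assuming nothing beyond what the log shows (the real wrapper also manages xtrace state and failure accounting):

run_test_sketch() {              # hedged sketch, not the SPDK implementation
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                    # e.g. "$@" is the even_2G_alloc function
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

Invoked as run_test_sketch even_2G_alloc even_2G_alloc, mirroring the run_test even_2G_alloc even_2G_alloc call traced above.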
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.964 ************************************ 00:03:56.964 START TEST even_2G_alloc 00:03:56.964 ************************************ 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.964 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.225 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.225 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc 
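The get_test_nr_hugepages trace just above reduces to straightforward arithmetic: a 2097152 kB (2 GiB) request divided by the 2048 kB default hugepage size gives 1024 pages, and with a single NUMA node the whole count lands on node0. A sketch of that computation with illustrative variable names (not the script's own):

size_kb=2097152                                    # requested pool, 2 GiB expressed in kB
default_hugepage_kb=2048                           # Hugepagesize reported by /proc/meminfo
nr_hugepages=$(( size_kb / default_hugepage_kb ))  # 1024 pages
nodes=1                                            # this VM exposes a single NUMA node
per_node=$(( nr_hugepages / nodes ))               # all 1024 assigned to node0
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes (node0 gets $per_node pages)"

HUGE_EVEN_ALLOC=yes is then meant to have scripts/setup.sh spread that allocation evenly across the available nodes, which verify_nr_hugepages re-checks below.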
-- setup/hugepages.sh@92 -- # local surp 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7998672 kB' 'MemAvailable: 9510056 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493600 kB' 'Inactive: 1352744 kB' 'Active(anon): 131400 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122548 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140404 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73648 kB' 'KernelStack: 6464 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.225 19:19:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue  (the same IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue skip pattern repeats for every key from MemAvailable through VmallocTotal) 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.226 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7998932 kB' 'MemAvailable: 9510316 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493360 kB' 'Inactive: 1352744 kB' 'Active(anon): 131160 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 
1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122232 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140408 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73652 kB' 'KernelStack: 6496 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
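Every get_meminfo call in this trace follows the same pattern: snapshot /proc/meminfo, then walk it with IFS=': ' until the requested key matches, and echo its value. A compact, hedged rewrite of that idea (the real helper in test/setup/common.sh also strips "Node N" prefixes and can read /sys/devices/system/node/node<N>/meminfo when a node argument is given):

get_meminfo_sketch() {            # illustrative name, not the SPDK helper itself
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"      # e.g. HugePages_Surp -> 0, MemTotal -> 12241972
            return 0
        fi
    done </proc/meminfo
    return 1                      # requested key not present
}

Used as surp=$(get_meminfo_sketch HugePages_Surp), which is exactly the value the surrounding trace is about to assign.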
00:03:57.227 19:19:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue  (this skip pattern repeats for every key from Active through HugePages_Total; none matches) 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.228 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7999012 kB' 'MemAvailable: 9510396 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493300 kB' 'Inactive: 1352744 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122172 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140404 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73648 kB' 'KernelStack: 6432 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:57.229 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- # continue  (the skip pattern repeats for the keys from MemTotal through CommitLimit)
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.490 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.490 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.491 nr_hugepages=1024 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.491 resv_hugepages=0 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.491 surplus_hugepages=0 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.491 anon_hugepages=0 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7999012 kB' 'MemAvailable: 9510396 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493032 kB' 'Inactive: 1352744 kB' 'Active(anon): 130832 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121952 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140404 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73648 kB' 'KernelStack: 6432 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:57.491 19:19:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.491 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31/@32 read, compare and continue cycle repeats for every remaining meminfo field until HugePages_Total matches ...]
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.493 19:19:47
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7999012 kB' 'MemUsed: 4242960 kB' 'SwapCached: 0 kB' 'Active: 493212 kB' 'Inactive: 1352744 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48656 kB' 'AnonPages: 122132 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140400 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.493 19:19:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.493 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31/@32 read, compare and continue cycle repeats for every remaining node0 meminfo field until HugePages_Surp matches ...]
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.494 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.494 node0=1024 expecting 1024
00:03:57.495 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:57.495 19:19:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:57.495
00:03:57.495 real 0m0.486s
00:03:57.495 user 0m0.260s
00:03:57.495 sys 0m0.256s
00:03:57.495 19:19:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:57.495 19:19:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:57.495 ************************************
00:03:57.495 END TEST even_2G_alloc
00:03:57.495 ************************************
00:03:57.495 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:57.495 19:19:47 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:57.495 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:57.495 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:57.495 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.495 ************************************
00:03:57.495 START TEST odd_alloc
00:03:57.495 ************************************
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
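The three get_meminfo lookups traced above (HugePages_Rsvd, HugePages_Total, and the per-node HugePages_Surp) all follow the same pattern: pick /proc/meminfo or the node-specific meminfo file, strip the "Node N " prefix, then scan field by field until the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh (the function name and argument handling here are assumptions):

# Sketch only: approximates what the traced get_meminfo appears to do.
# get_meminfo_sketch is a hypothetical name, not the SPDK helper itself.
get_meminfo_sketch() {
	local get=$1 node=$2 var val _ mem_f line
	local -a mem
	shopt -s extglob                      # needed for the +([0-9]) pattern below
	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every key with "Node N "
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue  # the long compare-and-continue trace above
		echo "$val"                       # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
		return 0
	done
}

For instance, "get_meminfo_sketch HugePages_Surp 0" would read node0's meminfo and print 0, matching the "echo 0" seen at the end of the per-node lookup above.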
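The odd_alloc sizing just traced also checks out numerically: get_test_nr_hugepages is handed 2098176 kB (HUGEMEM=2049, i.e. 2049 MB) and lands on nr_hugepages=1025, an intentionally odd page count. A back-of-the-envelope check of those numbers (the round-up step is an assumption about how the script gets from 1024.5 to 1025, not copied from hugepages.sh):

# Assumed derivation, shown only to make the traced values concrete.
hugemem_mb=2049
size_kb=$((hugemem_mb * 1024))            # 2098176 kB, the argument traced above
hugepagesize_kb=2048                      # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"         # prints nr_hugepages=1025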
00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.495 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.754 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.754 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.754 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993308 kB' 'MemAvailable: 9504692 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493684 kB' 'Inactive: 1352744 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140476 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73720 kB' 'KernelStack: 6396 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 
19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.755 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.755 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 
19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
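With anon read back as 0, the trace moves on to HugePages_Surp using the same get_meminfo pattern: point mem_f at /proc/meminfo (or a per-node meminfo file when a node is given), split each line on ': ', and compare the key against the requested field. A minimal standalone sketch of that pattern, not the setup/common.sh source itself:

    get_meminfo_sketch() {
        # $1 is the /proc/meminfo key to look up, e.g. HugePages_Surp
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 0 for HugePages_Surp on this run
                return 0
            fi
        done < /proc/meminfo
        return 1               # assumed fallback; the real helper behaves differently
    }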
00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993880 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493216 kB' 'Inactive: 1352744 kB' 'Active(anon): 131016 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122132 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140500 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73744 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.756 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
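Each comparison in this scan writes the key with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) so the right-hand side of [[ == ]] is matched literally rather than treated as a glob. A quoted string gives the same literal match; both tests below succeed:

    key=HugePages_Surp
    [[ $key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo "escaped literal matches"
    [[ $key == "HugePages_Surp" ]] && echo "quoted literal matches"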
00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.016 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 
19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993880 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 492960 kB' 'Inactive: 1352744 kB' 'Active(anon): 130760 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121876 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140500 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73744 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
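Once surp has come back as 0, the remaining scan reads HugePages_Rsvd the same way, and the check that follows further down in the trace reduces to simple accounting: HugePages_Total must equal the requested page count plus any surplus and reserved pages. A sketch using the values reported on this run (1025 total, surp=0, resv=0 as seen later in the trace):

    total=1025          # HugePages_Total from /proc/meminfo
    nr_hugepages=1025   # requested by the odd_alloc test
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"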
00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.017 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.018 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.019 nr_hugepages=1025 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:58.019 resv_hugepages=0 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.019 surplus_hugepages=0 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.019 anon_hugepages=0 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
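The xtrace above is setup/common.sh scanning every key of a meminfo snapshot until it reaches the requested field; here HugePages_Rsvd resolves to 0, which is what feeds resv=0 and the resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 echoes. A minimal sketch of that lookup pattern, reconstructed from the trace rather than copied from the SPDK source (lookup_meminfo is a hypothetical name standing in for the traced get_meminfo):

  shopt -s extglob                                  # the "Node N " strip below uses an extended glob
  lookup_meminfo() {                                # hypothetical name; mirrors the traced flow
      local want=$1 node=$2 var val mem_f=/proc/meminfo
      local -a mem
      # a per-node file is preferred when a node index is supplied
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")              # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  lookup_meminfo HugePages_Rsvd                     # -> 0 in the run above, hence resv=0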
mem_f=/proc/meminfo 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993880 kB' 'MemAvailable: 9505264 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493232 kB' 'Inactive: 1352744 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122148 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140496 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73740 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
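The /proc/meminfo snapshot printed above is internally consistent for the odd_alloc case: 1025 huge pages of 2048 kB account exactly for the Hugetlb figure (1025 * 2048 = 2099200 kB). A quick check of that relationship, assuming the same field names:

  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
       END { printf "pages=%d pagesize=%dkB product=%dkB hugetlb=%dkB\n", t, s, t*s, h }' /proc/meminfo
  # with the values above: pages=1025 pagesize=2048kB product=2099200kB hugetlb=2099200kB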
_ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.019 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.020 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993880 kB' 'MemUsed: 4248092 kB' 'SwapCached: 0 kB' 'Active: 493192 kB' 'Inactive: 1352744 kB' 'Active(anon): 130992 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48656 kB' 'AnonPages: 122132 kB' 'Shmem: 10464 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140496 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 
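With the global count confirmed (echo 1025, return 0), get_nodes enumerates /sys/devices/system/node/node* (a single node in this VM, so no_nodes=1) and the loop re-reads node0's own meminfo for HugePages_Surp. Roughly, the per-node probe looks like the following; this is a paraphrase of the traced flow, not the literal hugepages.sh arithmetic, and it reuses the lookup_meminfo sketch from earlier:

  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      total=$(lookup_meminfo HugePages_Total "$node")   # 1025 on node0 in the run above
      surp=$(lookup_meminfo HugePages_Surp "$node")     # 0 above
      echo "node${node}: HugePages_Total=$total HugePages_Surp=$surp"
  done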
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.021 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:58.022 node0=1025 expecting 1025 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:58.022 00:03:58.022 real 0m0.515s 00:03:58.022 user 0m0.280s 00:03:58.022 sys 0m0.268s 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.022 19:19:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.022 ************************************ 00:03:58.022 END TEST odd_alloc 00:03:58.022 ************************************ 00:03:58.022 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.022 19:19:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:58.022 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.022 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.022 19:19:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.022 ************************************ 00:03:58.022 START TEST custom_alloc 00:03:58.022 ************************************ 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- 
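The odd_alloc test closes with node0=1025 expecting 1025, and custom_alloc immediately asks get_test_nr_hugepages for 1048576 kB. With the 2048 kB default huge page size shown in the snapshots, that request becomes 512 pages; the conversion amounts to the following (field name taken from the snapshots above):

  size_kb=1048576                                                  # requested by get_test_nr_hugepages
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  echo $(( size_kb / hugepage_kb ))                                # -> 512, i.e. nr_hugepages=512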
setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.022 19:19:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.280 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.280 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.280 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
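The per-node request is then handed to scripts/setup.sh as HUGENODE='nodes_hp[0]=512', and the script reports the PCI devices it leaves alone (the mounted virtio disk) or finds already bound to uio_pci_generic. The invocation visible in the trace boils down to the line below; the path is the one logged, and the exact HUGENODE contract belongs to setup.sh itself rather than to this sketch:

  HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # expected effect in this run: 512 x 2048 kB pages reserved on node0; the 1af4:1001 virtio disk
  # backing the rootfs is skipped, and the 1b36:0010 devices stay on uio_pci_generic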
# verify_nr_hugepages 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9044016 kB' 'MemAvailable: 10555400 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493620 kB' 'Inactive: 1352744 kB' 'Active(anon): 131420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122584 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140512 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73756 kB' 'KernelStack: 6420 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 
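verify_nr_hugepages first checks that transparent huge pages are not forced off (the '[[ always [madvise] never != *[never]* ]]' test above matches the usual format of /sys/kernel/mm/transparent_hugepage/enabled) and then samples AnonHugePages. The new snapshot also reflects the custom allocation: HugePages_Total is now 512 and Hugetlb is 1048576 kB, i.e. 512 x 2048 kB. A compact version of that gate, with the sysfs path assumed from the value format rather than read out of the trace:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "anon_hugepages=${anon:-0}"
  fi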
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.281 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.282 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 9044016 kB' 'MemAvailable: 10555400 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 492932 kB' 'Inactive: 1352744 kB' 'Active(anon): 130732 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122132 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140516 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73760 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.545 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.546 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9044016 kB' 'MemAvailable: 10555400 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 492960 kB' 'Inactive: 1352744 kB' 'Active(anon): 130760 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122136 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140516 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73760 kB' 'KernelStack: 6448 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.547 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.548 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:58.549 nr_hugepages=512 00:03:58.549 resv_hugepages=0 
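(Editor's note on the trace above: each get_meminfo call in setup/common.sh scans /proc/meminfo with IFS=': ', reading one "key value" pair per line and emitting a continue for every field that is not the requested key, which is why AnonHugePages, HugePages_Surp and HugePages_Rsvd each produce a long run of matches before echoing their value. A simplified, standalone sketch of that lookup pattern follows; it is illustrative only and not the actual setup/common.sh helper, and it omits the per-node /sys/devices/system/node/node$node/meminfo branch the trace also exercises.)

    #!/usr/bin/env bash
    # Simplified sketch of the field lookup visible in the trace above:
    # scan /proc/meminfo line by line and print the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"                        # value only (unit, if any, lands in $_)
            return 0
        done < /proc/meminfo
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Surp   ->  0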
00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.549 surplus_hugepages=0 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.549 anon_hugepages=0 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.549 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9044016 kB' 'MemAvailable: 10555400 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 492920 kB' 'Inactive: 1352744 kB' 'Active(anon): 130720 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122088 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140504 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73748 kB' 'KernelStack: 6432 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.551 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 
19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9044016 kB' 'MemUsed: 3197956 kB' 'SwapCached: 0 kB' 'Active: 492916 kB' 'Inactive: 1352744 kB' 'Active(anon): 130716 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48656 kB' 'AnonPages: 122100 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140504 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.552 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.553 node0=512 expecting 512 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:58.553 00:03:58.553 real 0m0.546s 00:03:58.553 user 0m0.256s 00:03:58.553 sys 0m0.295s 00:03:58.553 19:19:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.554 19:19:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.554 ************************************ 00:03:58.554 END TEST custom_alloc 
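
The tail of the custom_alloc trace is the verification arithmetic: the global HugePages_Total read from /proc/meminfo (512) has to equal the requested page count plus surplus and reserved pages, and each NUMA node's total has to match what the test allocated there, hence the "node0=512 expecting 512" check before the test is declared done. A rough, self-contained sketch of that bookkeeping for the single-node case (awk one-liners stand in for the read loop; names are illustrative):

    # Rough sketch of the hugepage verification seen above (single-node case).
    expected=512                                                # pages the test requested on node0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo) # 512 in the trace
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # 0

    (( total == expected + surp + resv )) || echo "global hugepage count mismatch" >&2

    for node_meminfo in /sys/devices/system/node/node[0-9]*/meminfo; do
        node=${node_meminfo%/meminfo}; node=${node##*node}
        got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_meminfo")
        echo "node$node=$got expecting $expected"               # e.g. node0=512 expecting 512
    done
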
00:03:58.554 ************************************ 00:03:58.554 19:19:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.554 19:19:48 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:58.554 19:19:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.554 19:19:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.554 19:19:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.554 ************************************ 00:03:58.554 START TEST no_shrink_alloc 00:03:58.554 ************************************ 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.554 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.812 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.812 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:59.074 
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994008 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493272 kB' 'Inactive: 1352744 kB' 'Active(anon): 131072 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122216 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140504 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73748 kB' 'KernelStack: 6452 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
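
By this point the no_shrink_alloc test has sized its pool: get_test_nr_hugepages 2097152 0 lands on nr_hugepages=1024, consistent with dividing the requested 2097152 kB by the 2048 kB hugepage size reported in meminfo, and the whole amount is assigned to node 0 (node_ids=('0'), nodes_test[0]=1024). The snapshot above agrees: HugePages_Total: 1024 and Hugetlb: 2097152 kB. A back-of-the-envelope sketch of that sizing (the kB interpretation of the size argument is an assumption inferred from the numbers, and the variable names are illustrative):

    # Sketch of the sizing arithmetic behind "get_test_nr_hugepages 2097152 0".
    # Assumes the size argument is in kB, which is what the numbers in the trace imply.
    size_kb=2097152                                                      # 2 GiB requested
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024

    user_nodes=(0)                                                       # node list passed by the test
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages                                   # nodes_test[0]=1024
    done
    echo "nr_hugepages=$nr_hugepages"                                    # 1024, matching HugePages_Total above
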
00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.074 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@32 test/continue and @31 IFS/read trace repeats for every remaining /proc/meminfo key, Active(file) through HardwareCorrupted; none of them matches AnonHugePages ...]
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
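The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one 'key: value' row at a time (IFS=': ' read -r var val _), taking the continue branch for every non-matching key and echoing the value once the requested key (here AnonHugePages) is found. A minimal stand-alone sketch of that lookup pattern, with the function name and exact structure illustrative rather than the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix pattern below
    # Illustrative helper (not the SPDK function itself): print one field from
    # /proc/meminfo, or from a NUMA node's meminfo when a node number is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }                # per-node rows start with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"     # same split the trace uses
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done <"$mem_f"
        echo 0                                         # key not present at all: report 0
    }
    get_meminfo_sketch AnonHugePages   # prints 0 on this VM, matching the dump below

In this run the lookup returns 0, which hugepages.sh stores as anon=0 before moving on to the surplus counter.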
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994008 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 492996 kB' 'Inactive: 1352744 kB' 'Active(anon): 130796 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121904 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140496 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73740 kB' 'KernelStack: 6436 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
00:03:59.075 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@32 test/continue and @31 IFS/read trace repeated for every /proc/meminfo key from MemTotal through HugePages_Rsvd; none of them matches HugePages_Surp ...]
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
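Each get_meminfo call in this test (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd and HugePages_Total below) rescans the whole file, which is why the same @32 test and continue branch appear once per /proc/meminfo key in the xtrace. For a quick manual check outside the harness, an awk filter returns the same numbers without the per-key noise; this is an equivalent shown for orientation, not what setup/common.sh does:

    # System-wide value of one field (prints 0 for HugePages_Surp on this VM):
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
    # Per-node value, if the per-node sysfs file exists; its rows carry a "Node <N>" prefix:
    awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo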
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.076 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994008 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493048 kB' 'Inactive: 1352744 kB' 'Active(anon): 130848 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121920 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140488 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73732 kB' 'KernelStack: 6432 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@32 test/continue and @31 IFS/read trace repeated for every /proc/meminfo key from MemTotal through HugePages_Free; none of them matches HugePages_Rsvd ...]
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.077 nr_hugepages=1024
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:59.077 resv_hugepages=0
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.077 surplus_hugepages=0
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.077 anon_hugepages=0
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
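With anon=0, surp=0 and resv=0 collected and the summary echoed (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), hugepages.sh@107 and @109 assert that the configured pool is intact before HugePages_Total is re-read for the final comparison. A compact stand-alone version of that assertion, reusing the illustrative helper sketched earlier; reading nr_hugepages from /proc/sys/vm/nr_hugepages is an assumption here, since the trace shows the variable already set:

    expected=1024                                  # pool size requested by the test (per the trace)
    anon=$(get_meminfo_sketch AnonHugePages)       # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)  # assumed source; set earlier in the real script
    # no_shrink_alloc expectation: the pool was neither shrunk nor grown behind the test's back.
    (( expected == nr_hugepages + surp + resv )) || echo "FAIL: pool accounting drifted"
    (( expected == nr_hugepages ))               || echo "FAIL: nr_hugepages != $expected"
    (( expected == $(get_meminfo_sketch HugePages_Total) )) || echo "FAIL: HugePages_Total != $expected"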
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994008 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493020 kB' 'Inactive: 1352744 kB' 'Active(anon): 130820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122152 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140488 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73732 kB' 'KernelStack: 6416 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@32 test/continue and @31 IFS/read trace repeated for the /proc/meminfo keys from MemTotal through Mapped; the HugePages_Total scan is still in progress and continues below ...]
00:03:59.077 19:19:48
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
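The scan above is only there to pull HugePages_Total back out of that snapshot; once it returns, verify_nr_hugepages asserts that the kernel's total equals the requested count plus surplus and reserved pages, using the values echoed earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0). A sketch of that assertion with this run's numbers; the variable names follow the trace, the exact wiring in hugepages.sh is assumed.

# Assertion sketch for the (( ... )) checks traced at hugepages.sh@107-@110.
nr_hugepages=1024 resv=0 surp=0
hp_total=1024   # value get_meminfo HugePages_Total returns above
(( hp_total == nr_hugepages + surp + resv )) || echo "hugepage count mismatch"
(( hp_total == nr_hugepages )) || echo "unexpected HugePages_Total"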
00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.077 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994008 kB' 'MemUsed: 4247964 kB' 'SwapCached: 0 kB' 'Active: 492848 kB' 'Inactive: 1352744 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48656 kB' 'AnonPages: 122020 kB' 'Shmem: 10464 kB' 'KernelStack: 6452 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140484 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 
19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.078 node0=1024 expecting 1024 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:59.078 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:59.079 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.079 19:19:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.337 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.337 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.337 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:59.337 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993032 kB' 'MemAvailable: 9504416 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493896 kB' 'Inactive: 1352744 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122856 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140532 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73776 kB' 'KernelStack: 6468 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
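Before the CLEAR_HUGE=no NRHUGE=512 rerun of scripts/setup.sh above (which reports that 1024 pages are already allocated on node0), the per-node pass ran get_meminfo HugePages_Surp 0 against the node's sysfs meminfo file and folded reserved and surplus pages into the per-node expectation, ending in the node0=1024 expecting 1024 comparison. A sketch of that bookkeeping, reconstructed from the hugepages.sh@112-@130 trace; the array layout is an assumption.

# Per-node bookkeeping sketched from the trace above; only the += steps and the
# final "node0=1024 expecting 1024" comparison are taken from the log.
nodes_test[0]=1024   # pages this test expects on node 0
nodes_sys[0]=1024    # pages the kernel reports for node 0 (from get_nodes)
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # fold reserved pages into the expectation
    surp=0                           # get_meminfo HugePages_Surp $node returned 0 above
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || echo "node$node mismatch"
done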
00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.599 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
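The glob test traced at hugepages.sh@96 decides whether transparent hugepages could be inflating the count: the THP mode string on this VM is "always [madvise] never" (madvise selected), it does not contain "[never]", so the script goes on to read AnonHugePages, which a little further down the trace comes back 0 and is recorded as anon=0. A sketch of that guard, assuming the mode string is read from the usual sysfs file; the trace only shows the already-expanded value.

# THP guard sketched from hugepages.sh@96-@97 above; the sysfs path and the awk
# lookup are assumptions standing in for the real get_meminfo call.
anon=0
thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_mode != *"[never]"* ]]; then
    # THP is not fully disabled, so anonymous hugepages could exist; count them.
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # 0 in this run
fi
echo "anon_hugepages=$anon"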
00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.600 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7993032 kB' 'MemAvailable: 9504416 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493696 kB' 'Inactive: 1352744 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122396 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140516 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73760 kB' 'KernelStack: 6404 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 
19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.601 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7992780 kB' 'MemAvailable: 9504164 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493212 kB' 'Inactive: 1352744 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140524 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73768 kB' 'KernelStack: 6432 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:59.602 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.603 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.604 nr_hugepages=1024 00:03:59.604 resv_hugepages=0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.604 surplus_hugepages=0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.604 anon_hugepages=0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7992780 kB' 'MemAvailable: 9504164 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493168 kB' 'Inactive: 1352744 kB' 'Active(anon): 130968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122076 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66756 kB' 'Slab: 140516 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73760 kB' 'KernelStack: 6416 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.604 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.605 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
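Once HugePages_Total resolves to 1024, the trace that follows repeats the same lookup per NUMA node: get_meminfo is invoked with node=0, swaps its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo when that file exists, and strips the leading 'Node 0 ' prefix from each line before parsing. A small sketch of that source selection, under the assumption that extglob is enabled as the traced +([0-9]) pattern requires; the function name is illustrative.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern seen in the trace

# Sketch: pick the right meminfo source for an optional NUMA node and
# normalise its lines so they parse like /proc/meminfo.
node_meminfo_sketch() {
    local node=$1 get=$2
    local mem_f=/proc/meminfo mem var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

node_meminfo_sketch 0 HugePages_Surp
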
00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7992780 kB' 'MemUsed: 4249192 kB' 'SwapCached: 0 kB' 'Active: 
493028 kB' 'Inactive: 1352744 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48656 kB' 'AnonPages: 121936 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66756 kB' 'Slab: 140516 kB' 'SReclaimable: 66756 kB' 'SUnreclaim: 73760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 
19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.606 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.607 node0=1024 expecting 1024 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.607 00:03:59.607 real 0m0.976s 00:03:59.607 user 0m0.499s 00:03:59.607 sys 0m0.543s 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.607 19:19:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.607 ************************************ 00:03:59.607 END TEST no_shrink_alloc 00:03:59.607 ************************************ 00:03:59.607 19:19:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.607 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:59.607 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.607 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.607 
19:19:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.607 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.608 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.608 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.608 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.608 19:19:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.608 00:03:59.608 real 0m4.381s 00:03:59.608 user 0m2.159s 00:03:59.608 sys 0m2.334s 00:03:59.608 19:19:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.608 19:19:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.608 ************************************ 00:03:59.608 END TEST hugepages 00:03:59.608 ************************************ 00:03:59.608 19:19:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.608 19:19:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:59.608 19:19:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.608 19:19:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.608 19:19:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.608 ************************************ 00:03:59.608 START TEST driver 00:03:59.608 ************************************ 00:03:59.608 19:19:49 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:59.865 * Looking for test storage... 00:03:59.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:59.865 19:19:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:59.865 19:19:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.865 19:19:49 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.431 19:19:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:00.431 19:19:50 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.431 19:19:50 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.431 19:19:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:00.431 ************************************ 00:04:00.431 START TEST guess_driver 00:04:00.431 ************************************ 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
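This part of the trace is driver.sh picking a userspace I/O driver: vfio is accepted only when IOMMU groups are present under /sys/kernel/iommu_groups or unsafe no-IOMMU mode is enabled, and otherwise (as in the lines that follow) the script settles on uio_pci_generic, provided modprobe --show-depends can resolve the module. A compressed sketch of that decision using only the checks visible in the trace; the function names are illustrative, not the script's own.

#!/usr/bin/env bash
shopt -s nullglob   # so an empty /sys/kernel/iommu_groups expands to zero elements

# Sketch of the traced decision: prefer vfio when the IOMMU is usable,
# otherwise fall back to uio_pci_generic if the module can be resolved.
have_vfio() {
    local unsafe="" groups=(/sys/kernel/iommu_groups/*)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
        && unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]
}

have_uio() {
    # modprobe --show-depends prints "insmod /lib/modules/.../uio_pci_generic.ko.xz ..."
    # when the module is known to the running kernel, as seen in the trace below.
    [[ $(modprobe --show-depends uio_pci_generic 2> /dev/null) == *.ko* ]]
}

if have_vfio; then
    echo "driver=vfio-pci"
elif have_uio; then
    echo "driver=uio_pci_generic"
else
    echo "No valid driver found" >&2
fi
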
00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:00.431 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:00.431 Looking for driver=uio_pci_generic 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.431 19:19:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.997 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:00.997 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:00.997 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.997 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.997 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:00.997 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.256 19:19:50 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.823 00:04:01.823 real 0m1.377s 00:04:01.823 user 0m0.564s 00:04:01.823 sys 0m0.805s 00:04:01.823 19:19:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:01.823 19:19:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.823 ************************************ 00:04:01.823 END TEST guess_driver 00:04:01.823 ************************************ 00:04:01.823 19:19:51 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:01.823 00:04:01.823 real 0m2.065s 00:04:01.823 user 0m0.803s 00:04:01.823 sys 0m1.298s 00:04:01.823 19:19:51 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.823 19:19:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.823 ************************************ 00:04:01.823 END TEST driver 00:04:01.823 ************************************ 00:04:01.823 19:19:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:01.823 19:19:51 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:01.823 19:19:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.823 19:19:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.823 19:19:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.823 ************************************ 00:04:01.823 START TEST devices 00:04:01.823 ************************************ 00:04:01.823 19:19:51 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:01.823 * Looking for test storage... 00:04:01.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:01.823 19:19:51 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:01.823 19:19:51 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:01.823 19:19:51 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.823 19:19:51 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
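The devices suite opens by filtering out zoned namespaces: for every /sys/block/nvme* entry it checks whether queue/zoned exists and reads something other than 'none'. A stand-alone sketch of that filter; the array and helper names here are illustrative, and in the traced run every device reports 'none', so nothing is collected.

#!/usr/bin/env bash
shopt -s nullglob   # an empty /sys/block/nvme* glob should yield no iterations

# Sketch of the zoned-namespace filter: a device counts as zoned only when
# /sys/block/<dev>/queue/zoned exists and reads something other than "none".
declare -A zoned_devs=()

is_block_zoned_sketch() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    is_block_zoned_sketch "$dev" && zoned_devs[$dev]=1
done

echo "found ${#zoned_devs[@]} zoned nvme device(s): ${!zoned_devs[*]}"
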
00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.766 19:19:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:02.766 No valid GPT data, bailing 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:02.766 
19:19:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:02.766 No valid GPT data, bailing 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:02.766 No valid GPT data, bailing 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:02.766 19:19:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:02.766 19:19:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:02.766 19:19:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:02.767 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:02.767 19:19:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:02.767 19:19:52 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:03.044 No valid GPT data, bailing 00:04:03.044 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.044 19:19:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:03.044 19:19:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:03.044 19:19:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:03.044 19:19:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:03.044 19:19:52 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:03.044 19:19:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:03.044 19:19:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.044 19:19:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.044 19:19:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.044 ************************************ 00:04:03.044 START TEST nvme_mount 00:04:03.044 ************************************ 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.044 19:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:03.979 Creating new GPT entries in memory. 00:04:03.979 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:03.979 other utilities. 00:04:03.979 19:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:03.979 19:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.979 19:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.979 19:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.979 19:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:04.912 Creating new GPT entries in memory. 00:04:04.912 The operation has completed successfully. 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58923 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:04.912 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.171 19:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.430 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.690 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.690 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:05.690 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.690 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.690 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.948 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:05.949 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:05.949 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.949 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.949 19:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.207 19:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.207 19:19:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.466 19:19:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.724 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.984 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.984 00:04:06.984 real 0m3.998s 00:04:06.984 user 0m0.684s 00:04:06.984 sys 0m0.994s 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.984 19:19:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:06.984 ************************************ 00:04:06.984 END TEST nvme_mount 00:04:06.984 ************************************ 00:04:06.984 19:19:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:06.984 19:19:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:06.984 19:19:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.984 19:19:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.984 19:19:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.984 ************************************ 00:04:06.984 START TEST dm_mount 00:04:06.984 ************************************ 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
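Before the dm_mount partitioning continues below, it is worth condensing what the nvme_mount run above actually did: make an ext4 filesystem on the namespace, mount it under the test tree, drop the test_nvme marker, confirm that setup.sh reports the device as active and refuses to bind it, then unmount and wipe the signatures. A stand-alone sketch of that cycle, with paths and flags copied from the trace (the marker-file redirection is inferred, since the trace only shows the bare ':' builtin):

    dev=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev" 1024M                 # as setup/common.sh@71 above
    mount "$dev" "$mnt"
    : > "$mnt/test_nvme"                       # marker later checked and removed by verify()
    # while mounted, 'setup.sh config' must print "Active devices: mount@..., so not binding PCI dev"
    PCI_ALLOWED=0000:00:11.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "$dev"                        # cleanup_nvme erases the ext4/GPT signatures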
00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:06.984 19:19:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:07.919 Creating new GPT entries in memory. 00:04:07.919 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.919 other utilities. 00:04:07.919 19:19:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.919 19:19:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.919 19:19:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:07.919 19:19:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.919 19:19:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:09.294 Creating new GPT entries in memory. 00:04:09.294 The operation has completed successfully. 00:04:09.294 19:19:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:09.294 19:19:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.294 19:19:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:09.294 19:19:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:09.294 19:19:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:10.227 The operation has completed successfully. 
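partition_drive in setup/common.sh prepares nvme0n1 for the device-mapper test by wiping the label and carving two identical partitions; dm_mount then builds the nvme_dm_test mapper device on top of nvme0n1p1 and nvme0n1p2 (visible below as dmsetup create followed by mkfs.ext4 on /dev/mapper/nvme_dm_test). Stripped of the xtrace noise, the partitioning above amounts to:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                              # destroys the old GPT/MBR, as sgdisk reports above
    # 1073741824 / 4096 = 262144 sectors per partition (128 MiB at 512-byte sectors)
    flock "$disk" sgdisk "$disk" --new=1:2048:264191      # nvme0n1p1
    flock "$disk" sgdisk "$disk" --new=2:264192:526335    # nvme0n1p2

sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 runs alongside the sgdisk calls, presumably so the script only proceeds once udev has surfaced both partition nodes.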
00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59357 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.227 19:19:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.227 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.227 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:10.227 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.228 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.228 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.228 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.488 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.488 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.488 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.488 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.749 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:11.007 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:11.265 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.265 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:11.265 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.265 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:11.265 19:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:11.265 00:04:11.265 real 0m4.184s 00:04:11.265 user 0m0.456s 00:04:11.265 sys 0m0.693s 00:04:11.265 ************************************ 00:04:11.265 END TEST dm_mount 00:04:11.265 ************************************ 00:04:11.265 19:20:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.265 19:20:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:11.265 19:20:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:11.265 19:20:00 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:11.265 19:20:00 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:11.265 19:20:00 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.265 19:20:00 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.265 19:20:00 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:11.265 19:20:00 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.265 19:20:00 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:11.522 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:11.522 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:11.522 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:11.522 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:11.522 19:20:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:11.522 19:20:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:11.522 19:20:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:11.522 19:20:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.523 19:20:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:11.523 19:20:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.523 19:20:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:11.523 00:04:11.523 real 0m9.707s 00:04:11.523 user 0m1.824s 00:04:11.523 sys 0m2.250s 00:04:11.523 19:20:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.523 ************************************ 00:04:11.523 END TEST devices 00:04:11.523 19:20:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:11.523 ************************************ 00:04:11.523 19:20:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:11.523 00:04:11.523 real 0m21.047s 00:04:11.523 user 0m6.974s 00:04:11.523 sys 0m8.551s 00:04:11.523 19:20:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.523 ************************************ 00:04:11.523 END TEST setup.sh 00:04:11.523 ************************************ 00:04:11.523 19:20:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.523 19:20:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:11.523 19:20:01 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.087 Hugepages 00:04:12.087 node hugesize free / total 00:04:12.087 node0 1048576kB 0 / 0 00:04:12.344 node0 2048kB 2048 / 2048 00:04:12.344 00:04:12.344 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.344 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:12.344 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:12.344 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:12.344 19:20:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:12.344 19:20:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:12.344 19:20:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:12.344 19:20:02 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.277 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.277 19:20:02 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:14.276 19:20:03 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:14.276 19:20:03 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:14.276 19:20:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.276 19:20:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:14.276 19:20:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:14.276 19:20:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:14.276 19:20:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.276 19:20:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:14.276 19:20:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:14.276 19:20:04 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:14.276 19:20:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:14.276 19:20:04 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.789 Waiting for block devices as requested 00:04:14.789 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.789 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.789 19:20:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:14.789 19:20:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:15.046 19:20:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:15.046 19:20:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:15.046 19:20:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:15.046 19:20:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1557 -- # continue 00:04:15.046 
19:20:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:15.046 19:20:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:15.046 19:20:04 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:15.046 19:20:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:15.046 19:20:04 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:15.046 19:20:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:15.046 19:20:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:15.046 19:20:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:15.046 19:20:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:15.046 19:20:04 -- common/autotest_common.sh@1557 -- # continue 00:04:15.046 19:20:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:15.046 19:20:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:15.046 19:20:04 -- common/autotest_common.sh@10 -- # set +x 00:04:15.046 19:20:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:15.046 19:20:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.046 19:20:04 -- common/autotest_common.sh@10 -- # set +x 00:04:15.046 19:20:04 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.613 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.872 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.872 19:20:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:15.872 19:20:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:15.872 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:04:15.872 19:20:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:15.872 19:20:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:15.872 19:20:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:15.872 19:20:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:15.872 19:20:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:15.872 19:20:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:15.872 19:20:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:15.872 19:20:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:15.872 19:20:05 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.872 19:20:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.872 19:20:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:15.872 19:20:05 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:15.872 19:20:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:15.872 19:20:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:15.872 19:20:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:15.872 19:20:05 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:15.872 19:20:05 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.872 19:20:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:15.872 19:20:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:15.872 19:20:05 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:15.872 19:20:05 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.872 19:20:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:15.872 19:20:05 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:15.872 19:20:05 -- common/autotest_common.sh@1593 -- # return 0 00:04:15.872 19:20:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:15.872 19:20:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:15.872 19:20:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.872 19:20:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.872 19:20:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:15.872 19:20:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.872 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:04:15.872 19:20:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:15.872 19:20:05 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.872 19:20:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.872 19:20:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.872 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:04:15.872 ************************************ 00:04:15.872 START TEST env 00:04:15.872 ************************************ 00:04:15.872 19:20:05 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:16.130 * Looking for test storage... 
00:04:16.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:16.130 19:20:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:16.130 19:20:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.130 19:20:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.130 19:20:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.130 ************************************ 00:04:16.130 START TEST env_memory 00:04:16.130 ************************************ 00:04:16.130 19:20:05 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:16.130 00:04:16.130 00:04:16.130 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.130 http://cunit.sourceforge.net/ 00:04:16.130 00:04:16.130 00:04:16.130 Suite: memory 00:04:16.130 Test: alloc and free memory map ...[2024-07-15 19:20:05.789584] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:16.130 passed 00:04:16.130 Test: mem map translation ...[2024-07-15 19:20:05.826872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:16.130 [2024-07-15 19:20:05.826971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:16.130 [2024-07-15 19:20:05.827047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:16.130 [2024-07-15 19:20:05.827069] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:16.130 passed 00:04:16.130 Test: mem map registration ...[2024-07-15 19:20:05.897444] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:16.130 [2024-07-15 19:20:05.897508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:16.130 passed 00:04:16.389 Test: mem map adjacent registrations ...passed 00:04:16.389 00:04:16.389 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.389 suites 1 1 n/a 0 0 00:04:16.389 tests 4 4 4 0 0 00:04:16.389 asserts 152 152 152 0 n/a 00:04:16.389 00:04:16.389 Elapsed time = 0.224 seconds 00:04:16.389 00:04:16.389 real 0m0.244s 00:04:16.389 user 0m0.221s 00:04:16.389 sys 0m0.016s 00:04:16.389 19:20:05 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.389 19:20:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:16.389 ************************************ 00:04:16.389 END TEST env_memory 00:04:16.389 ************************************ 00:04:16.389 19:20:06 env -- common/autotest_common.sh@1142 -- # return 0 00:04:16.389 19:20:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:16.389 19:20:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.389 19:20:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.389 19:20:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.389 ************************************ 00:04:16.389 START TEST env_vtophys 
00:04:16.389 ************************************ 00:04:16.389 19:20:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:16.389 EAL: lib.eal log level changed from notice to debug 00:04:16.389 EAL: Detected lcore 0 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 1 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 2 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 3 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 4 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 5 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 6 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 7 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 8 as core 0 on socket 0 00:04:16.389 EAL: Detected lcore 9 as core 0 on socket 0 00:04:16.389 EAL: Maximum logical cores by configuration: 128 00:04:16.389 EAL: Detected CPU lcores: 10 00:04:16.389 EAL: Detected NUMA nodes: 1 00:04:16.389 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:16.389 EAL: Detected shared linkage of DPDK 00:04:16.389 EAL: No shared files mode enabled, IPC will be disabled 00:04:16.389 EAL: Selected IOVA mode 'PA' 00:04:16.389 EAL: Probing VFIO support... 00:04:16.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:16.389 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:16.389 EAL: Ask a virtual area of 0x2e000 bytes 00:04:16.389 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:16.389 EAL: Setting up physically contiguous memory... 00:04:16.389 EAL: Setting maximum number of open files to 524288 00:04:16.389 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:16.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:16.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.389 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:16.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.389 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:16.389 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:16.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.389 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:16.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:16.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:16.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.389 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:16.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:16.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:16.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:16.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.389 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:16.389 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:16.389 EAL: Hugepages will be freed exactly as allocated. 
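The repeated "Ask a virtual area of 0x400000000 bytes" lines during vtophys start-up are address-space reservations, not allocations: each of the four memseg lists covers n_segs 8192 pages of hugepage_sz 2 MiB, and 8192 * 2097152 = 17179869184 bytes = 0x400000000, so roughly 64 GiB of virtual address space is reserved up front while actual 2 MiB hugepages are mapped in on demand as the heap grows in the malloc tests below. The arithmetic, checked from the shell:

    printf '0x%x\n' $((8192 * 2097152))            # 0x400000000 — one memseg list VA window
    echo "$((4 * 8192 * 2097152 / 1024**3)) GiB"   # 64 GiB reserved across the 4 lists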
00:04:16.389 EAL: No shared files mode enabled, IPC is disabled 00:04:16.389 EAL: No shared files mode enabled, IPC is disabled 00:04:16.389 EAL: TSC frequency is ~2200000 KHz 00:04:16.389 EAL: Main lcore 0 is ready (tid=7f48b5811a00;cpuset=[0]) 00:04:16.390 EAL: Trying to obtain current memory policy. 00:04:16.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.390 EAL: Restoring previous memory policy: 0 00:04:16.390 EAL: request: mp_malloc_sync 00:04:16.390 EAL: No shared files mode enabled, IPC is disabled 00:04:16.390 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.390 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:16.390 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.390 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.390 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:16.648 00:04:16.648 00:04:16.648 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.648 http://cunit.sourceforge.net/ 00:04:16.648 00:04:16.648 00:04:16.648 Suite: components_suite 00:04:16.648 Test: vtophys_malloc_test ...passed 00:04:16.648 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.648 EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.648 EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.648 EAL: Trying to obtain current memory policy. 
00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.648 EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.648 EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.648 EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.648 EAL: Trying to obtain current memory policy. 00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.648 EAL: Restoring previous memory policy: 4 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.648 EAL: request: mp_malloc_sync 00:04:16.648 EAL: No shared files mode enabled, IPC is disabled 00:04:16.648 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.648 EAL: Trying to obtain current memory policy. 
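The expand/shrink pairs in vtophys_spdk_malloc_test follow a fixed progression: 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB for k = 1..10, so each request roughly doubles and every expansion is matched by an equal shrink once the buffer is freed. The sequence is easy to reproduce:

    for k in $(seq 1 10); do printf '%sMB ' $((2**k + 2)); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB — matching the Heap messages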
00:04:16.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.908 EAL: Restoring previous memory policy: 4 00:04:16.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.908 EAL: request: mp_malloc_sync 00:04:16.908 EAL: No shared files mode enabled, IPC is disabled 00:04:16.908 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.908 EAL: request: mp_malloc_sync 00:04:16.908 EAL: No shared files mode enabled, IPC is disabled 00:04:16.908 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.908 EAL: Trying to obtain current memory policy. 00:04:16.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.167 EAL: Restoring previous memory policy: 4 00:04:17.167 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.167 EAL: request: mp_malloc_sync 00:04:17.167 EAL: No shared files mode enabled, IPC is disabled 00:04:17.167 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.167 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.167 passed 00:04:17.167 00:04:17.167 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.167 suites 1 1 n/a 0 0 00:04:17.167 tests 2 2 2 0 0 00:04:17.167 asserts 5232 5232 5232 0 n/a 00:04:17.167 00:04:17.167 Elapsed time = 0.731 seconds 00:04:17.167 EAL: request: mp_malloc_sync 00:04:17.167 EAL: No shared files mode enabled, IPC is disabled 00:04:17.167 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:17.167 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.167 EAL: request: mp_malloc_sync 00:04:17.167 EAL: No shared files mode enabled, IPC is disabled 00:04:17.167 EAL: Heap on socket 0 was shrunk by 2MB 00:04:17.167 EAL: No shared files mode enabled, IPC is disabled 00:04:17.167 EAL: No shared files mode enabled, IPC is disabled 00:04:17.167 EAL: No shared files mode enabled, IPC is disabled 00:04:17.167 00:04:17.167 real 0m0.937s 00:04:17.167 user 0m0.471s 00:04:17.167 sys 0m0.337s 00:04:17.167 19:20:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.167 ************************************ 00:04:17.167 END TEST env_vtophys 00:04:17.167 ************************************ 00:04:17.167 19:20:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:17.427 19:20:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.427 19:20:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.427 19:20:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.427 19:20:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.427 19:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.427 ************************************ 00:04:17.427 START TEST env_pci 00:04:17.427 ************************************ 00:04:17.427 19:20:07 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.427 00:04:17.427 00:04:17.427 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.427 http://cunit.sourceforge.net/ 00:04:17.427 00:04:17.427 00:04:17.427 Suite: pci 00:04:17.427 Test: pci_hook ...[2024-07-15 19:20:07.036935] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60539 has claimed it 00:04:17.427 passed 00:04:17.427 00:04:17.427 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.427 suites 1 1 n/a 0 0 00:04:17.427 tests 1 1 1 0 0 00:04:17.427 asserts 25 25 25 0 n/a 00:04:17.427 
00:04:17.427 Elapsed time = 0.003 seconds 00:04:17.427 EAL: Cannot find device (10000:00:01.0) 00:04:17.427 EAL: Failed to attach device on primary process 00:04:17.427 00:04:17.427 real 0m0.024s 00:04:17.427 user 0m0.012s 00:04:17.427 sys 0m0.011s 00:04:17.427 19:20:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.427 ************************************ 00:04:17.427 END TEST env_pci 00:04:17.427 ************************************ 00:04:17.427 19:20:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:17.427 19:20:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.427 19:20:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:17.427 19:20:07 env -- env/env.sh@15 -- # uname 00:04:17.427 19:20:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:17.427 19:20:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:17.427 19:20:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.427 19:20:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:17.427 19:20:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.427 19:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.427 ************************************ 00:04:17.427 START TEST env_dpdk_post_init 00:04:17.427 ************************************ 00:04:17.427 19:20:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.427 EAL: Detected CPU lcores: 10 00:04:17.427 EAL: Detected NUMA nodes: 1 00:04:17.427 EAL: Detected shared linkage of DPDK 00:04:17.427 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.427 EAL: Selected IOVA mode 'PA' 00:04:17.686 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.686 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:17.686 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:17.686 Starting DPDK initialization... 00:04:17.686 Starting SPDK post initialization... 00:04:17.686 SPDK NVMe probe 00:04:17.686 Attaching to 0000:00:10.0 00:04:17.686 Attaching to 0000:00:11.0 00:04:17.686 Attached to 0000:00:10.0 00:04:17.686 Attached to 0000:00:11.0 00:04:17.686 Cleaning up... 
00:04:17.686 00:04:17.686 real 0m0.176s 00:04:17.686 user 0m0.043s 00:04:17.686 sys 0m0.033s 00:04:17.686 19:20:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.686 ************************************ 00:04:17.686 END TEST env_dpdk_post_init 00:04:17.686 ************************************ 00:04:17.686 19:20:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.686 19:20:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.686 19:20:07 env -- env/env.sh@26 -- # uname 00:04:17.686 19:20:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.686 19:20:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.686 19:20:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.686 19:20:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.686 19:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.686 ************************************ 00:04:17.686 START TEST env_mem_callbacks 00:04:17.686 ************************************ 00:04:17.686 19:20:07 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.686 EAL: Detected CPU lcores: 10 00:04:17.686 EAL: Detected NUMA nodes: 1 00:04:17.686 EAL: Detected shared linkage of DPDK 00:04:17.686 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.686 EAL: Selected IOVA mode 'PA' 00:04:17.686 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.686 00:04:17.686 00:04:17.686 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.686 http://cunit.sourceforge.net/ 00:04:17.686 00:04:17.686 00:04:17.686 Suite: memory 00:04:17.686 Test: test ... 
00:04:17.686 register 0x200000200000 2097152 00:04:17.686 malloc 3145728 00:04:17.686 register 0x200000400000 4194304 00:04:17.686 buf 0x200000500000 len 3145728 PASSED 00:04:17.686 malloc 64 00:04:17.686 buf 0x2000004fff40 len 64 PASSED 00:04:17.686 malloc 4194304 00:04:17.686 register 0x200000800000 6291456 00:04:17.686 buf 0x200000a00000 len 4194304 PASSED 00:04:17.686 free 0x200000500000 3145728 00:04:17.686 free 0x2000004fff40 64 00:04:17.686 unregister 0x200000400000 4194304 PASSED 00:04:17.686 free 0x200000a00000 4194304 00:04:17.686 unregister 0x200000800000 6291456 PASSED 00:04:17.686 malloc 8388608 00:04:17.686 register 0x200000400000 10485760 00:04:17.686 buf 0x200000600000 len 8388608 PASSED 00:04:17.686 free 0x200000600000 8388608 00:04:17.686 unregister 0x200000400000 10485760 PASSED 00:04:17.686 passed 00:04:17.686 00:04:17.686 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.686 suites 1 1 n/a 0 0 00:04:17.686 tests 1 1 1 0 0 00:04:17.686 asserts 15 15 15 0 n/a 00:04:17.686 00:04:17.686 Elapsed time = 0.009 seconds 00:04:17.686 00:04:17.686 real 0m0.144s 00:04:17.686 user 0m0.021s 00:04:17.686 sys 0m0.023s 00:04:17.686 19:20:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.686 ************************************ 00:04:17.686 END TEST env_mem_callbacks 00:04:17.686 19:20:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:17.686 ************************************ 00:04:17.945 19:20:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.945 00:04:17.945 real 0m1.852s 00:04:17.945 user 0m0.885s 00:04:17.945 sys 0m0.618s 00:04:17.946 19:20:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.946 19:20:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.946 ************************************ 00:04:17.946 END TEST env 00:04:17.946 ************************************ 00:04:17.946 19:20:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.946 19:20:07 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.946 19:20:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.946 19:20:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.946 19:20:07 -- common/autotest_common.sh@10 -- # set +x 00:04:17.946 ************************************ 00:04:17.946 START TEST rpc 00:04:17.946 ************************************ 00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.946 * Looking for test storage... 00:04:17.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.946 19:20:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60648 00:04:17.946 19:20:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:17.946 19:20:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.946 19:20:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60648 00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@829 -- # '[' -z 60648 ']' 00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
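Note: the env_mem_callbacks pass earlier in this chunk checks that every growth or shrink of the DPDK heap is mirrored by a memory register/unregister callback: the 3 MiB, 4 MiB and 8 MiB mallocs above show up as register events of 4 MiB, 6 MiB and 10 MiB (heap expansion happens in 2 MiB hugepage multiples, so the registered sizes are larger than the requested ones), and the frees produce the matching unregister events counted by the 15 asserts. A sketch of re-running just that unit test outside the env.sh wrapper:

    # sketch: same binary the harness invoked above; it takes no extra arguments
    /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks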
00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.946 19:20:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.946 [2024-07-15 19:20:07.709486] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:17.946 [2024-07-15 19:20:07.710066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60648 ] 00:04:18.205 [2024-07-15 19:20:07.846431] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.205 [2024-07-15 19:20:07.915723] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:18.205 [2024-07-15 19:20:07.915777] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60648' to capture a snapshot of events at runtime. 00:04:18.205 [2024-07-15 19:20:07.915792] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:18.205 [2024-07-15 19:20:07.915802] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:18.205 [2024-07-15 19:20:07.915812] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60648 for offline analysis/debug. 00:04:18.205 [2024-07-15 19:20:07.915855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.464 19:20:08 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.464 19:20:08 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:18.464 19:20:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.464 19:20:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.464 19:20:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:18.464 19:20:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:18.464 19:20:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.464 19:20:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.464 19:20:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.464 ************************************ 00:04:18.464 START TEST rpc_integrity 00:04:18.464 ************************************ 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.464 19:20:08 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.464 { 00:04:18.464 "aliases": [ 00:04:18.464 "aa6ca054-e80a-47a3-b2d4-08642607b01a" 00:04:18.464 ], 00:04:18.464 "assigned_rate_limits": { 00:04:18.464 "r_mbytes_per_sec": 0, 00:04:18.464 "rw_ios_per_sec": 0, 00:04:18.464 "rw_mbytes_per_sec": 0, 00:04:18.464 "w_mbytes_per_sec": 0 00:04:18.464 }, 00:04:18.464 "block_size": 512, 00:04:18.464 "claimed": false, 00:04:18.464 "driver_specific": {}, 00:04:18.464 "memory_domains": [ 00:04:18.464 { 00:04:18.464 "dma_device_id": "system", 00:04:18.464 "dma_device_type": 1 00:04:18.464 }, 00:04:18.464 { 00:04:18.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.464 "dma_device_type": 2 00:04:18.464 } 00:04:18.464 ], 00:04:18.464 "name": "Malloc0", 00:04:18.464 "num_blocks": 16384, 00:04:18.464 "product_name": "Malloc disk", 00:04:18.464 "supported_io_types": { 00:04:18.464 "abort": true, 00:04:18.464 "compare": false, 00:04:18.464 "compare_and_write": false, 00:04:18.464 "copy": true, 00:04:18.464 "flush": true, 00:04:18.464 "get_zone_info": false, 00:04:18.464 "nvme_admin": false, 00:04:18.464 "nvme_io": false, 00:04:18.464 "nvme_io_md": false, 00:04:18.464 "nvme_iov_md": false, 00:04:18.464 "read": true, 00:04:18.464 "reset": true, 00:04:18.464 "seek_data": false, 00:04:18.464 "seek_hole": false, 00:04:18.464 "unmap": true, 00:04:18.464 "write": true, 00:04:18.464 "write_zeroes": true, 00:04:18.464 "zcopy": true, 00:04:18.464 "zone_append": false, 00:04:18.464 "zone_management": false 00:04:18.464 }, 00:04:18.464 "uuid": "aa6ca054-e80a-47a3-b2d4-08642607b01a", 00:04:18.464 "zoned": false 00:04:18.464 } 00:04:18.464 ]' 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.464 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.464 [2024-07-15 19:20:08.238513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:18.464 [2024-07-15 19:20:08.238574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.464 [2024-07-15 19:20:08.238597] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5fbc70 00:04:18.464 [2024-07-15 19:20:08.238609] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.464 [2024-07-15 19:20:08.240432] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.464 [2024-07-15 19:20:08.240476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.464 Passthru0 00:04:18.464 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.464 19:20:08 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.465 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.465 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.724 { 00:04:18.724 "aliases": [ 00:04:18.724 "aa6ca054-e80a-47a3-b2d4-08642607b01a" 00:04:18.724 ], 00:04:18.724 "assigned_rate_limits": { 00:04:18.724 "r_mbytes_per_sec": 0, 00:04:18.724 "rw_ios_per_sec": 0, 00:04:18.724 "rw_mbytes_per_sec": 0, 00:04:18.724 "w_mbytes_per_sec": 0 00:04:18.724 }, 00:04:18.724 "block_size": 512, 00:04:18.724 "claim_type": "exclusive_write", 00:04:18.724 "claimed": true, 00:04:18.724 "driver_specific": {}, 00:04:18.724 "memory_domains": [ 00:04:18.724 { 00:04:18.724 "dma_device_id": "system", 00:04:18.724 "dma_device_type": 1 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.724 "dma_device_type": 2 00:04:18.724 } 00:04:18.724 ], 00:04:18.724 "name": "Malloc0", 00:04:18.724 "num_blocks": 16384, 00:04:18.724 "product_name": "Malloc disk", 00:04:18.724 "supported_io_types": { 00:04:18.724 "abort": true, 00:04:18.724 "compare": false, 00:04:18.724 "compare_and_write": false, 00:04:18.724 "copy": true, 00:04:18.724 "flush": true, 00:04:18.724 "get_zone_info": false, 00:04:18.724 "nvme_admin": false, 00:04:18.724 "nvme_io": false, 00:04:18.724 "nvme_io_md": false, 00:04:18.724 "nvme_iov_md": false, 00:04:18.724 "read": true, 00:04:18.724 "reset": true, 00:04:18.724 "seek_data": false, 00:04:18.724 "seek_hole": false, 00:04:18.724 "unmap": true, 00:04:18.724 "write": true, 00:04:18.724 "write_zeroes": true, 00:04:18.724 "zcopy": true, 00:04:18.724 "zone_append": false, 00:04:18.724 "zone_management": false 00:04:18.724 }, 00:04:18.724 "uuid": "aa6ca054-e80a-47a3-b2d4-08642607b01a", 00:04:18.724 "zoned": false 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "aliases": [ 00:04:18.724 "717cf980-a688-5a78-bcaf-988553563a36" 00:04:18.724 ], 00:04:18.724 "assigned_rate_limits": { 00:04:18.724 "r_mbytes_per_sec": 0, 00:04:18.724 "rw_ios_per_sec": 0, 00:04:18.724 "rw_mbytes_per_sec": 0, 00:04:18.724 "w_mbytes_per_sec": 0 00:04:18.724 }, 00:04:18.724 "block_size": 512, 00:04:18.724 "claimed": false, 00:04:18.724 "driver_specific": { 00:04:18.724 "passthru": { 00:04:18.724 "base_bdev_name": "Malloc0", 00:04:18.724 "name": "Passthru0" 00:04:18.724 } 00:04:18.724 }, 00:04:18.724 "memory_domains": [ 00:04:18.724 { 00:04:18.724 "dma_device_id": "system", 00:04:18.724 "dma_device_type": 1 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.724 "dma_device_type": 2 00:04:18.724 } 00:04:18.724 ], 00:04:18.724 "name": "Passthru0", 00:04:18.724 "num_blocks": 16384, 00:04:18.724 "product_name": "passthru", 00:04:18.724 "supported_io_types": { 00:04:18.724 "abort": true, 00:04:18.724 "compare": false, 00:04:18.724 "compare_and_write": false, 00:04:18.724 "copy": true, 00:04:18.724 "flush": true, 00:04:18.724 "get_zone_info": false, 00:04:18.724 "nvme_admin": false, 00:04:18.724 "nvme_io": false, 00:04:18.724 "nvme_io_md": false, 00:04:18.724 "nvme_iov_md": false, 00:04:18.724 "read": true, 00:04:18.724 "reset": true, 00:04:18.724 "seek_data": false, 00:04:18.724 "seek_hole": false, 00:04:18.724 "unmap": true, 00:04:18.724 "write": true, 00:04:18.724 "write_zeroes": true, 00:04:18.724 "zcopy": true, 
00:04:18.724 "zone_append": false, 00:04:18.724 "zone_management": false 00:04:18.724 }, 00:04:18.724 "uuid": "717cf980-a688-5a78-bcaf-988553563a36", 00:04:18.724 "zoned": false 00:04:18.724 } 00:04:18.724 ]' 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.724 19:20:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.724 00:04:18.724 real 0m0.301s 00:04:18.724 user 0m0.198s 00:04:18.724 sys 0m0.037s 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.724 ************************************ 00:04:18.724 END TEST rpc_integrity 00:04:18.724 ************************************ 00:04:18.724 19:20:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.724 19:20:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:18.724 19:20:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.724 19:20:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.724 19:20:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 ************************************ 00:04:18.724 START TEST rpc_plugins 00:04:18.724 ************************************ 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:18.724 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.724 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:18.724 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.724 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:18.724 { 
00:04:18.724 "aliases": [ 00:04:18.724 "508c0d0c-0ec8-44d7-8dc2-762de338731a" 00:04:18.724 ], 00:04:18.724 "assigned_rate_limits": { 00:04:18.724 "r_mbytes_per_sec": 0, 00:04:18.724 "rw_ios_per_sec": 0, 00:04:18.724 "rw_mbytes_per_sec": 0, 00:04:18.724 "w_mbytes_per_sec": 0 00:04:18.724 }, 00:04:18.724 "block_size": 4096, 00:04:18.724 "claimed": false, 00:04:18.724 "driver_specific": {}, 00:04:18.724 "memory_domains": [ 00:04:18.724 { 00:04:18.724 "dma_device_id": "system", 00:04:18.724 "dma_device_type": 1 00:04:18.724 }, 00:04:18.724 { 00:04:18.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.724 "dma_device_type": 2 00:04:18.724 } 00:04:18.725 ], 00:04:18.725 "name": "Malloc1", 00:04:18.725 "num_blocks": 256, 00:04:18.725 "product_name": "Malloc disk", 00:04:18.725 "supported_io_types": { 00:04:18.725 "abort": true, 00:04:18.725 "compare": false, 00:04:18.725 "compare_and_write": false, 00:04:18.725 "copy": true, 00:04:18.725 "flush": true, 00:04:18.725 "get_zone_info": false, 00:04:18.725 "nvme_admin": false, 00:04:18.725 "nvme_io": false, 00:04:18.725 "nvme_io_md": false, 00:04:18.725 "nvme_iov_md": false, 00:04:18.725 "read": true, 00:04:18.725 "reset": true, 00:04:18.725 "seek_data": false, 00:04:18.725 "seek_hole": false, 00:04:18.725 "unmap": true, 00:04:18.725 "write": true, 00:04:18.725 "write_zeroes": true, 00:04:18.725 "zcopy": true, 00:04:18.725 "zone_append": false, 00:04:18.725 "zone_management": false 00:04:18.725 }, 00:04:18.725 "uuid": "508c0d0c-0ec8-44d7-8dc2-762de338731a", 00:04:18.725 "zoned": false 00:04:18.725 } 00:04:18.725 ]' 00:04:18.725 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:18.725 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:18.725 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:18.725 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.725 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.983 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.983 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:18.983 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.983 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.983 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.983 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:18.983 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:18.983 19:20:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:18.983 00:04:18.983 real 0m0.156s 00:04:18.983 user 0m0.101s 00:04:18.983 sys 0m0.017s 00:04:18.983 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.983 ************************************ 00:04:18.983 END TEST rpc_plugins 00:04:18.983 ************************************ 00:04:18.983 19:20:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.983 19:20:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.983 19:20:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:18.983 19:20:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.983 19:20:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.983 19:20:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.983 ************************************ 00:04:18.983 START TEST 
rpc_trace_cmd_test 00:04:18.983 ************************************ 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.983 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:18.983 "bdev": { 00:04:18.983 "mask": "0x8", 00:04:18.983 "tpoint_mask": "0xffffffffffffffff" 00:04:18.983 }, 00:04:18.983 "bdev_nvme": { 00:04:18.983 "mask": "0x4000", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "blobfs": { 00:04:18.983 "mask": "0x80", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "dsa": { 00:04:18.983 "mask": "0x200", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "ftl": { 00:04:18.983 "mask": "0x40", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "iaa": { 00:04:18.983 "mask": "0x1000", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "iscsi_conn": { 00:04:18.983 "mask": "0x2", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "nvme_pcie": { 00:04:18.983 "mask": "0x800", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "nvme_tcp": { 00:04:18.983 "mask": "0x2000", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "nvmf_rdma": { 00:04:18.983 "mask": "0x10", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "nvmf_tcp": { 00:04:18.983 "mask": "0x20", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "scsi": { 00:04:18.983 "mask": "0x4", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "sock": { 00:04:18.983 "mask": "0x8000", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "thread": { 00:04:18.983 "mask": "0x400", 00:04:18.983 "tpoint_mask": "0x0" 00:04:18.983 }, 00:04:18.983 "tpoint_group_mask": "0x8", 00:04:18.984 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60648" 00:04:18.984 }' 00:04:18.984 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:18.984 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:18.984 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:18.984 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:18.984 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:19.242 00:04:19.242 real 0m0.243s 00:04:19.242 user 0m0.213s 00:04:19.242 sys 0m0.020s 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.242 19:20:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.242 ************************************ 00:04:19.242 END TEST 
rpc_trace_cmd_test 00:04:19.242 ************************************ 00:04:19.242 19:20:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.242 19:20:08 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:19.242 19:20:08 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:19.242 19:20:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.242 19:20:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.242 19:20:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.242 ************************************ 00:04:19.242 START TEST go_rpc 00:04:19.242 ************************************ 00:04:19.242 19:20:08 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:19.242 19:20:08 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:19.242 19:20:08 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:19.242 19:20:08 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:19.242 19:20:08 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:19.242 19:20:09 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.242 19:20:09 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.242 19:20:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.242 19:20:09 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.242 19:20:09 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:19.242 19:20:09 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:19.242 19:20:09 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["c42539d0-32fa-4204-a4b4-05f34044fbc5"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"c42539d0-32fa-4204-a4b4-05f34044fbc5","zoned":false}]' 00:04:19.242 19:20:09 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:19.501 19:20:09 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:19.501 19:20:09 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:19.501 19:20:09 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.501 19:20:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.501 19:20:09 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.501 19:20:09 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:19.501 19:20:09 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:19.502 19:20:09 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:19.502 19:20:09 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:19.502 00:04:19.502 real 0m0.217s 00:04:19.502 user 0m0.147s 00:04:19.502 sys 0m0.036s 00:04:19.502 19:20:09 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.502 ************************************ 00:04:19.502 END TEST go_rpc 00:04:19.502 ************************************ 00:04:19.502 
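Note: rpc_integrity, rpc_plugins and go_rpc above all exercise variations of the same malloc/passthru bdev lifecycle over the JSON-RPC socket of the spdk_tgt started for this suite; rpc_cmd (and the Go hello_gorpc client) drive the same RPCs, so the core sequence can be reproduced with plain scripts/rpc.py. A sketch, assuming the default /var/tmp/spdk.sock socket and that the auto-assigned malloc name comes back as Malloc0:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512-byte blocks (16384 blocks, as in the JSON above)
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 behind a passthru vbdev
    $rpc bdev_get_bdevs | jq length                    # 2: Malloc0 plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # back to 0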
19:20:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 19:20:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.502 19:20:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:19.502 19:20:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:19.502 19:20:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.502 19:20:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.502 19:20:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 ************************************ 00:04:19.502 START TEST rpc_daemon_integrity 00:04:19.502 ************************************ 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.502 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.761 { 00:04:19.761 "aliases": [ 00:04:19.761 "b60bd761-32d1-4420-a707-2270c6371e75" 00:04:19.761 ], 00:04:19.761 "assigned_rate_limits": { 00:04:19.761 "r_mbytes_per_sec": 0, 00:04:19.761 "rw_ios_per_sec": 0, 00:04:19.761 "rw_mbytes_per_sec": 0, 00:04:19.761 "w_mbytes_per_sec": 0 00:04:19.761 }, 00:04:19.761 "block_size": 512, 00:04:19.761 "claimed": false, 00:04:19.761 "driver_specific": {}, 00:04:19.761 "memory_domains": [ 00:04:19.761 { 00:04:19.761 "dma_device_id": "system", 00:04:19.761 "dma_device_type": 1 00:04:19.761 }, 00:04:19.761 { 00:04:19.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.761 "dma_device_type": 2 00:04:19.761 } 00:04:19.761 ], 00:04:19.761 "name": "Malloc3", 00:04:19.761 "num_blocks": 16384, 00:04:19.761 "product_name": "Malloc disk", 00:04:19.761 "supported_io_types": { 00:04:19.761 "abort": true, 00:04:19.761 "compare": false, 00:04:19.761 "compare_and_write": false, 00:04:19.761 "copy": true, 00:04:19.761 "flush": true, 00:04:19.761 "get_zone_info": false, 00:04:19.761 "nvme_admin": false, 00:04:19.761 "nvme_io": false, 00:04:19.761 "nvme_io_md": false, 00:04:19.761 "nvme_iov_md": false, 00:04:19.761 "read": true, 00:04:19.761 "reset": true, 00:04:19.761 
"seek_data": false, 00:04:19.761 "seek_hole": false, 00:04:19.761 "unmap": true, 00:04:19.761 "write": true, 00:04:19.761 "write_zeroes": true, 00:04:19.761 "zcopy": true, 00:04:19.761 "zone_append": false, 00:04:19.761 "zone_management": false 00:04:19.761 }, 00:04:19.761 "uuid": "b60bd761-32d1-4420-a707-2270c6371e75", 00:04:19.761 "zoned": false 00:04:19.761 } 00:04:19.761 ]' 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.761 [2024-07-15 19:20:09.366959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:19.761 [2024-07-15 19:20:09.367016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.761 [2024-07-15 19:20:09.367036] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x73d290 00:04:19.761 [2024-07-15 19:20:09.367046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.761 [2024-07-15 19:20:09.368500] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.761 [2024-07-15 19:20:09.368535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.761 Passthru0 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.761 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.761 { 00:04:19.761 "aliases": [ 00:04:19.761 "b60bd761-32d1-4420-a707-2270c6371e75" 00:04:19.761 ], 00:04:19.761 "assigned_rate_limits": { 00:04:19.761 "r_mbytes_per_sec": 0, 00:04:19.761 "rw_ios_per_sec": 0, 00:04:19.761 "rw_mbytes_per_sec": 0, 00:04:19.761 "w_mbytes_per_sec": 0 00:04:19.761 }, 00:04:19.761 "block_size": 512, 00:04:19.761 "claim_type": "exclusive_write", 00:04:19.761 "claimed": true, 00:04:19.761 "driver_specific": {}, 00:04:19.761 "memory_domains": [ 00:04:19.761 { 00:04:19.761 "dma_device_id": "system", 00:04:19.761 "dma_device_type": 1 00:04:19.761 }, 00:04:19.761 { 00:04:19.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.761 "dma_device_type": 2 00:04:19.761 } 00:04:19.761 ], 00:04:19.761 "name": "Malloc3", 00:04:19.761 "num_blocks": 16384, 00:04:19.761 "product_name": "Malloc disk", 00:04:19.761 "supported_io_types": { 00:04:19.761 "abort": true, 00:04:19.761 "compare": false, 00:04:19.761 "compare_and_write": false, 00:04:19.761 "copy": true, 00:04:19.761 "flush": true, 00:04:19.761 "get_zone_info": false, 00:04:19.761 "nvme_admin": false, 00:04:19.761 "nvme_io": false, 00:04:19.761 "nvme_io_md": false, 00:04:19.761 "nvme_iov_md": false, 00:04:19.761 "read": true, 00:04:19.761 "reset": true, 00:04:19.761 "seek_data": false, 00:04:19.761 "seek_hole": false, 00:04:19.761 "unmap": true, 00:04:19.761 "write": true, 00:04:19.761 
"write_zeroes": true, 00:04:19.761 "zcopy": true, 00:04:19.761 "zone_append": false, 00:04:19.761 "zone_management": false 00:04:19.761 }, 00:04:19.761 "uuid": "b60bd761-32d1-4420-a707-2270c6371e75", 00:04:19.761 "zoned": false 00:04:19.761 }, 00:04:19.761 { 00:04:19.761 "aliases": [ 00:04:19.761 "ca0917a8-ebff-5bbf-a85e-508b86f9cdd8" 00:04:19.761 ], 00:04:19.761 "assigned_rate_limits": { 00:04:19.761 "r_mbytes_per_sec": 0, 00:04:19.761 "rw_ios_per_sec": 0, 00:04:19.761 "rw_mbytes_per_sec": 0, 00:04:19.761 "w_mbytes_per_sec": 0 00:04:19.761 }, 00:04:19.761 "block_size": 512, 00:04:19.761 "claimed": false, 00:04:19.761 "driver_specific": { 00:04:19.761 "passthru": { 00:04:19.762 "base_bdev_name": "Malloc3", 00:04:19.762 "name": "Passthru0" 00:04:19.762 } 00:04:19.762 }, 00:04:19.762 "memory_domains": [ 00:04:19.762 { 00:04:19.762 "dma_device_id": "system", 00:04:19.762 "dma_device_type": 1 00:04:19.762 }, 00:04:19.762 { 00:04:19.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.762 "dma_device_type": 2 00:04:19.762 } 00:04:19.762 ], 00:04:19.762 "name": "Passthru0", 00:04:19.762 "num_blocks": 16384, 00:04:19.762 "product_name": "passthru", 00:04:19.762 "supported_io_types": { 00:04:19.762 "abort": true, 00:04:19.762 "compare": false, 00:04:19.762 "compare_and_write": false, 00:04:19.762 "copy": true, 00:04:19.762 "flush": true, 00:04:19.762 "get_zone_info": false, 00:04:19.762 "nvme_admin": false, 00:04:19.762 "nvme_io": false, 00:04:19.762 "nvme_io_md": false, 00:04:19.762 "nvme_iov_md": false, 00:04:19.762 "read": true, 00:04:19.762 "reset": true, 00:04:19.762 "seek_data": false, 00:04:19.762 "seek_hole": false, 00:04:19.762 "unmap": true, 00:04:19.762 "write": true, 00:04:19.762 "write_zeroes": true, 00:04:19.762 "zcopy": true, 00:04:19.762 "zone_append": false, 00:04:19.762 "zone_management": false 00:04:19.762 }, 00:04:19.762 "uuid": "ca0917a8-ebff-5bbf-a85e-508b86f9cdd8", 00:04:19.762 "zoned": false 00:04:19.762 } 00:04:19.762 ]' 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 
00:04:19.762 00:04:19.762 real 0m0.334s 00:04:19.762 user 0m0.220s 00:04:19.762 sys 0m0.042s 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.762 ************************************ 00:04:19.762 END TEST rpc_daemon_integrity 00:04:19.762 19:20:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.762 ************************************ 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.021 19:20:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.021 19:20:09 rpc -- rpc/rpc.sh@84 -- # killprocess 60648 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@948 -- # '[' -z 60648 ']' 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@952 -- # kill -0 60648 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@953 -- # uname 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60648 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.021 killing process with pid 60648 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60648' 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@967 -- # kill 60648 00:04:20.021 19:20:09 rpc -- common/autotest_common.sh@972 -- # wait 60648 00:04:20.279 00:04:20.279 real 0m2.308s 00:04:20.279 user 0m3.201s 00:04:20.279 sys 0m0.571s 00:04:20.279 19:20:09 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.279 19:20:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.279 ************************************ 00:04:20.279 END TEST rpc 00:04:20.279 ************************************ 00:04:20.279 19:20:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.279 19:20:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.279 19:20:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.279 19:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.279 19:20:09 -- common/autotest_common.sh@10 -- # set +x 00:04:20.279 ************************************ 00:04:20.279 START TEST skip_rpc 00:04:20.279 ************************************ 00:04:20.279 19:20:09 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.280 * Looking for test storage... 
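Note: the teardown above is the killprocess helper from common/autotest_common.sh: it confirms pid 60648 still exists (kill -0), inspects its command name (reactor_0 here), logs 'killing process with pid 60648', then signals and reaps it before the next suite starts its own target. A sketch of that idiom, with an illustrative variable name:

    spdk_pid=60648                       # pid printed by waitforlisten earlier in this log
    if kill -0 "$spdk_pid" 2>/dev/null; then
        echo "killing process with pid $spdk_pid"
        kill "$spdk_pid"
        wait "$spdk_pid"                 # reap it; only works from the shell that started the target
    fi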
00:04:20.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.280 19:20:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.280 19:20:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.280 19:20:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:20.280 19:20:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.280 19:20:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.280 19:20:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.280 ************************************ 00:04:20.280 START TEST skip_rpc 00:04:20.280 ************************************ 00:04:20.280 19:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:20.280 19:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60896 00:04:20.280 19:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:20.280 19:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.280 19:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:20.280 [2024-07-15 19:20:10.055149] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:20.280 [2024-07-15 19:20:10.055241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:04:20.543 [2024-07-15 19:20:10.197640] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.543 [2024-07-15 19:20:10.268788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.825 19:20:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.825 19:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:25.825 19:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.825 19:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:25.825 19:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:25.825 19:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.825 2024/07/15 19:20:15 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:25.825 19:20:15 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.825 19:20:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60896 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60896 ']' 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60896 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60896 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.826 killing process with pid 60896 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60896' 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60896 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60896 00:04:25.826 00:04:25.826 real 0m5.308s 00:04:25.826 user 0m5.022s 00:04:25.826 sys 0m0.193s 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.826 19:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.826 ************************************ 00:04:25.826 END TEST skip_rpc 00:04:25.826 ************************************ 00:04:25.826 19:20:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.826 19:20:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:25.826 19:20:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.826 19:20:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.826 19:20:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.826 ************************************ 00:04:25.826 START TEST skip_rpc_with_json 00:04:25.826 ************************************ 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60983 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60983 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 60983 ']' 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
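Note: the skip_rpc pass above starts spdk_tgt with --no-rpc-server, so /var/tmp/spdk.sock is never created and the spdk_get_version probe fails with 'no such file or directory', which is exactly what the NOT wrapper asserts. The same negative check can be done by hand; a sketch, assuming the workspace rpc.py path and the default socket:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
        && echo "unexpected: an RPC server answered" \
        || echo "expected failure: no RPC server when the target runs with --no-rpc-server"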
00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.826 19:20:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.826 [2024-07-15 19:20:15.436270] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:25.826 [2024-07-15 19:20:15.436408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60983 ] 00:04:25.826 [2024-07-15 19:20:15.569314] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.085 [2024-07-15 19:20:15.629694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.652 [2024-07-15 19:20:16.416389] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:26.652 2024/07/15 19:20:16 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:26.652 request: 00:04:26.652 { 00:04:26.652 "method": "nvmf_get_transports", 00:04:26.652 "params": { 00:04:26.652 "trtype": "tcp" 00:04:26.652 } 00:04:26.652 } 00:04:26.652 Got JSON-RPC error response 00:04:26.652 GoRPCClient: error on JSON-RPC call 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.652 [2024-07-15 19:20:16.428485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.652 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.912 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.912 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.912 { 00:04:26.912 "subsystems": [ 00:04:26.912 { 00:04:26.912 "subsystem": "keyring", 00:04:26.912 "config": [] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "iobuf", 00:04:26.912 "config": [ 00:04:26.912 { 00:04:26.912 "method": "iobuf_set_options", 00:04:26.912 "params": { 00:04:26.912 "large_bufsize": 135168, 00:04:26.912 "large_pool_count": 1024, 00:04:26.912 "small_bufsize": 8192, 00:04:26.912 "small_pool_count": 8192 00:04:26.912 } 00:04:26.912 } 
00:04:26.912 ] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "sock", 00:04:26.912 "config": [ 00:04:26.912 { 00:04:26.912 "method": "sock_set_default_impl", 00:04:26.912 "params": { 00:04:26.912 "impl_name": "posix" 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "sock_impl_set_options", 00:04:26.912 "params": { 00:04:26.912 "enable_ktls": false, 00:04:26.912 "enable_placement_id": 0, 00:04:26.912 "enable_quickack": false, 00:04:26.912 "enable_recv_pipe": true, 00:04:26.912 "enable_zerocopy_send_client": false, 00:04:26.912 "enable_zerocopy_send_server": true, 00:04:26.912 "impl_name": "ssl", 00:04:26.912 "recv_buf_size": 4096, 00:04:26.912 "send_buf_size": 4096, 00:04:26.912 "tls_version": 0, 00:04:26.912 "zerocopy_threshold": 0 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "sock_impl_set_options", 00:04:26.912 "params": { 00:04:26.912 "enable_ktls": false, 00:04:26.912 "enable_placement_id": 0, 00:04:26.912 "enable_quickack": false, 00:04:26.912 "enable_recv_pipe": true, 00:04:26.912 "enable_zerocopy_send_client": false, 00:04:26.912 "enable_zerocopy_send_server": true, 00:04:26.912 "impl_name": "posix", 00:04:26.912 "recv_buf_size": 2097152, 00:04:26.912 "send_buf_size": 2097152, 00:04:26.912 "tls_version": 0, 00:04:26.912 "zerocopy_threshold": 0 00:04:26.912 } 00:04:26.912 } 00:04:26.912 ] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "vmd", 00:04:26.912 "config": [] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "accel", 00:04:26.912 "config": [ 00:04:26.912 { 00:04:26.912 "method": "accel_set_options", 00:04:26.912 "params": { 00:04:26.912 "buf_count": 2048, 00:04:26.912 "large_cache_size": 16, 00:04:26.912 "sequence_count": 2048, 00:04:26.912 "small_cache_size": 128, 00:04:26.912 "task_count": 2048 00:04:26.912 } 00:04:26.912 } 00:04:26.912 ] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "bdev", 00:04:26.912 "config": [ 00:04:26.912 { 00:04:26.912 "method": "bdev_set_options", 00:04:26.912 "params": { 00:04:26.912 "bdev_auto_examine": true, 00:04:26.912 "bdev_io_cache_size": 256, 00:04:26.912 "bdev_io_pool_size": 65535, 00:04:26.912 "iobuf_large_cache_size": 16, 00:04:26.912 "iobuf_small_cache_size": 128 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "bdev_raid_set_options", 00:04:26.912 "params": { 00:04:26.912 "process_window_size_kb": 1024 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "bdev_iscsi_set_options", 00:04:26.912 "params": { 00:04:26.912 "timeout_sec": 30 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "bdev_nvme_set_options", 00:04:26.912 "params": { 00:04:26.912 "action_on_timeout": "none", 00:04:26.912 "allow_accel_sequence": false, 00:04:26.912 "arbitration_burst": 0, 00:04:26.912 "bdev_retry_count": 3, 00:04:26.912 "ctrlr_loss_timeout_sec": 0, 00:04:26.912 "delay_cmd_submit": true, 00:04:26.912 "dhchap_dhgroups": [ 00:04:26.912 "null", 00:04:26.912 "ffdhe2048", 00:04:26.912 "ffdhe3072", 00:04:26.912 "ffdhe4096", 00:04:26.912 "ffdhe6144", 00:04:26.912 "ffdhe8192" 00:04:26.912 ], 00:04:26.912 "dhchap_digests": [ 00:04:26.912 "sha256", 00:04:26.912 "sha384", 00:04:26.912 "sha512" 00:04:26.912 ], 00:04:26.912 "disable_auto_failback": false, 00:04:26.912 "fast_io_fail_timeout_sec": 0, 00:04:26.912 "generate_uuids": false, 00:04:26.912 "high_priority_weight": 0, 00:04:26.912 "io_path_stat": false, 00:04:26.912 "io_queue_requests": 0, 00:04:26.912 "keep_alive_timeout_ms": 10000, 00:04:26.912 "low_priority_weight": 0, 
00:04:26.912 "medium_priority_weight": 0, 00:04:26.912 "nvme_adminq_poll_period_us": 10000, 00:04:26.912 "nvme_error_stat": false, 00:04:26.912 "nvme_ioq_poll_period_us": 0, 00:04:26.912 "rdma_cm_event_timeout_ms": 0, 00:04:26.912 "rdma_max_cq_size": 0, 00:04:26.912 "rdma_srq_size": 0, 00:04:26.912 "reconnect_delay_sec": 0, 00:04:26.912 "timeout_admin_us": 0, 00:04:26.912 "timeout_us": 0, 00:04:26.912 "transport_ack_timeout": 0, 00:04:26.912 "transport_retry_count": 4, 00:04:26.912 "transport_tos": 0 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "bdev_nvme_set_hotplug", 00:04:26.912 "params": { 00:04:26.912 "enable": false, 00:04:26.912 "period_us": 100000 00:04:26.912 } 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "method": "bdev_wait_for_examine" 00:04:26.912 } 00:04:26.912 ] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "scsi", 00:04:26.912 "config": null 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "scheduler", 00:04:26.912 "config": [ 00:04:26.912 { 00:04:26.912 "method": "framework_set_scheduler", 00:04:26.912 "params": { 00:04:26.912 "name": "static" 00:04:26.912 } 00:04:26.912 } 00:04:26.912 ] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "vhost_scsi", 00:04:26.912 "config": [] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "vhost_blk", 00:04:26.912 "config": [] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "ublk", 00:04:26.912 "config": [] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "nbd", 00:04:26.912 "config": [] 00:04:26.912 }, 00:04:26.912 { 00:04:26.912 "subsystem": "nvmf", 00:04:26.913 "config": [ 00:04:26.913 { 00:04:26.913 "method": "nvmf_set_config", 00:04:26.913 "params": { 00:04:26.913 "admin_cmd_passthru": { 00:04:26.913 "identify_ctrlr": false 00:04:26.913 }, 00:04:26.913 "discovery_filter": "match_any" 00:04:26.913 } 00:04:26.913 }, 00:04:26.913 { 00:04:26.913 "method": "nvmf_set_max_subsystems", 00:04:26.913 "params": { 00:04:26.913 "max_subsystems": 1024 00:04:26.913 } 00:04:26.913 }, 00:04:26.913 { 00:04:26.913 "method": "nvmf_set_crdt", 00:04:26.913 "params": { 00:04:26.913 "crdt1": 0, 00:04:26.913 "crdt2": 0, 00:04:26.913 "crdt3": 0 00:04:26.913 } 00:04:26.913 }, 00:04:26.913 { 00:04:26.913 "method": "nvmf_create_transport", 00:04:26.913 "params": { 00:04:26.913 "abort_timeout_sec": 1, 00:04:26.913 "ack_timeout": 0, 00:04:26.913 "buf_cache_size": 4294967295, 00:04:26.913 "c2h_success": true, 00:04:26.913 "data_wr_pool_size": 0, 00:04:26.913 "dif_insert_or_strip": false, 00:04:26.913 "in_capsule_data_size": 4096, 00:04:26.913 "io_unit_size": 131072, 00:04:26.913 "max_aq_depth": 128, 00:04:26.913 "max_io_qpairs_per_ctrlr": 127, 00:04:26.913 "max_io_size": 131072, 00:04:26.913 "max_queue_depth": 128, 00:04:26.913 "num_shared_buffers": 511, 00:04:26.913 "sock_priority": 0, 00:04:26.913 "trtype": "TCP", 00:04:26.913 "zcopy": false 00:04:26.913 } 00:04:26.913 } 00:04:26.913 ] 00:04:26.913 }, 00:04:26.913 { 00:04:26.913 "subsystem": "iscsi", 00:04:26.913 "config": [ 00:04:26.913 { 00:04:26.913 "method": "iscsi_set_options", 00:04:26.913 "params": { 00:04:26.913 "allow_duplicated_isid": false, 00:04:26.913 "chap_group": 0, 00:04:26.913 "data_out_pool_size": 2048, 00:04:26.913 "default_time2retain": 20, 00:04:26.913 "default_time2wait": 2, 00:04:26.913 "disable_chap": false, 00:04:26.913 "error_recovery_level": 0, 00:04:26.913 "first_burst_length": 8192, 00:04:26.913 "immediate_data": true, 00:04:26.913 "immediate_data_pool_size": 16384, 00:04:26.913 "max_connections_per_session": 
2, 00:04:26.913 "max_large_datain_per_connection": 64, 00:04:26.913 "max_queue_depth": 64, 00:04:26.913 "max_r2t_per_connection": 4, 00:04:26.913 "max_sessions": 128, 00:04:26.913 "mutual_chap": false, 00:04:26.913 "node_base": "iqn.2016-06.io.spdk", 00:04:26.913 "nop_in_interval": 30, 00:04:26.913 "nop_timeout": 60, 00:04:26.913 "pdu_pool_size": 36864, 00:04:26.913 "require_chap": false 00:04:26.913 } 00:04:26.913 } 00:04:26.913 ] 00:04:26.913 } 00:04:26.913 ] 00:04:26.913 } 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60983 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60983 ']' 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60983 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60983 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.913 killing process with pid 60983 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60983' 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60983 00:04:26.913 19:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60983 00:04:27.172 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61027 00:04:27.172 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.172 19:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61027 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61027 ']' 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61027 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61027 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.436 killing process with pid 61027 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61027' 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61027 00:04:32.436 19:20:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61027 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.436 00:04:32.436 real 0m6.820s 00:04:32.436 user 0m6.761s 00:04:32.436 sys 0m0.457s 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.436 ************************************ 00:04:32.436 END TEST skip_rpc_with_json 00:04:32.436 ************************************ 00:04:32.436 19:20:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.436 19:20:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:32.436 19:20:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.436 19:20:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.436 19:20:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.436 ************************************ 00:04:32.436 START TEST skip_rpc_with_delay 00:04:32.436 ************************************ 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.436 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.437 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.437 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.437 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:32.437 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.695 [2024-07-15 19:20:22.304554] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:32.695 [2024-07-15 19:20:22.304709] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:32.695 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:32.695 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.695 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:32.695 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.695 00:04:32.695 real 0m0.103s 00:04:32.695 user 0m0.074s 00:04:32.695 sys 0m0.028s 00:04:32.695 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.695 ************************************ 00:04:32.695 19:20:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:32.695 END TEST skip_rpc_with_delay 00:04:32.695 ************************************ 00:04:32.695 19:20:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.695 19:20:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:32.695 19:20:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:32.695 19:20:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:32.695 19:20:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.695 19:20:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.695 19:20:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.695 ************************************ 00:04:32.695 START TEST exit_on_failed_rpc_init 00:04:32.695 ************************************ 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61132 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61132 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61132 ']' 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.695 19:20:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.695 [2024-07-15 19:20:22.454813] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:04:32.695 [2024-07-15 19:20:22.454928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61132 ] 00:04:32.954 [2024-07-15 19:20:22.591496] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.954 [2024-07-15 19:20:22.660921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:33.886 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:33.886 [2024-07-15 19:20:23.489488] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:33.886 [2024-07-15 19:20:23.489602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:04:33.886 [2024-07-15 19:20:23.630417] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.143 [2024-07-15 19:20:23.699896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.143 [2024-07-15 19:20:23.699990] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:34.143 [2024-07-15 19:20:23.700006] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:34.143 [2024-07-15 19:20:23.700016] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61132 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61132 ']' 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61132 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61132 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61132' 00:04:34.143 killing process with pid 61132 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61132 00:04:34.143 19:20:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61132 00:04:34.399 00:04:34.399 real 0m1.688s 00:04:34.399 user 0m2.114s 00:04:34.399 sys 0m0.284s 00:04:34.399 19:20:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.399 ************************************ 00:04:34.399 END TEST exit_on_failed_rpc_init 00:04:34.399 ************************************ 00:04:34.399 19:20:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.399 19:20:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.399 19:20:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.399 00:04:34.399 real 0m14.203s 00:04:34.399 user 0m14.068s 00:04:34.399 sys 0m1.134s 00:04:34.399 19:20:24 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.399 19:20:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.399 ************************************ 00:04:34.399 END TEST skip_rpc 00:04:34.399 ************************************ 00:04:34.399 19:20:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.399 19:20:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.399 19:20:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.399 
19:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.399 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:04:34.399 ************************************ 00:04:34.399 START TEST rpc_client 00:04:34.399 ************************************ 00:04:34.399 19:20:24 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.657 * Looking for test storage... 00:04:34.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:34.657 19:20:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:34.657 OK 00:04:34.657 19:20:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:34.657 00:04:34.657 real 0m0.100s 00:04:34.657 user 0m0.038s 00:04:34.657 sys 0m0.068s 00:04:34.657 19:20:24 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.657 19:20:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:34.657 ************************************ 00:04:34.657 END TEST rpc_client 00:04:34.657 ************************************ 00:04:34.657 19:20:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.657 19:20:24 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.657 19:20:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.657 19:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.657 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:04:34.657 ************************************ 00:04:34.657 START TEST json_config 00:04:34.657 ************************************ 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.657 19:20:24 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.657 19:20:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.657 19:20:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.657 19:20:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.657 19:20:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.657 19:20:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.657 19:20:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.657 19:20:24 json_config -- paths/export.sh@5 -- # export PATH 00:04:34.657 19:20:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@47 -- # : 0 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:34.657 19:20:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:34.657 INFO: JSON configuration test init 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.657 19:20:24 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:34.657 19:20:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:34.657 19:20:24 json_config -- json_config/common.sh@10 -- # shift 00:04:34.657 19:20:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.657 19:20:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.657 19:20:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.657 19:20:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.657 19:20:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.657 19:20:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61280 00:04:34.657 Waiting for target to run... 00:04:34.657 19:20:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
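For context: the json_config test launches its first target instance in --wait-for-rpc mode, so the framework stays paused until configuration has been pushed over the RPC socket. A minimal sketch of the launch that follows, with the flags exactly as recorded in this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc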
00:04:34.657 19:20:24 json_config -- json_config/common.sh@25 -- # waitforlisten 61280 /var/tmp/spdk_tgt.sock 00:04:34.657 19:20:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 61280 ']' 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.657 19:20:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.971 [2024-07-15 19:20:24.468402] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:34.971 [2024-07-15 19:20:24.468502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61280 ] 00:04:35.228 [2024-07-15 19:20:24.775694] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.228 [2024-07-15 19:20:24.831629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.793 19:20:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.793 19:20:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:35.793 19:20:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:35.793 00:04:35.793 19:20:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:35.793 19:20:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:35.793 19:20:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.793 19:20:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.793 19:20:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:35.793 19:20:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:35.793 19:20:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.793 19:20:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.793 19:20:25 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:35.793 19:20:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:35.793 19:20:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:36.358 19:20:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.358 19:20:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:36.358 19:20:25 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:36.358 19:20:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:36.615 19:20:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.615 19:20:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:36.615 19:20:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.615 19:20:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:36.615 19:20:26 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.615 19:20:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.889 MallocForNvmf0 00:04:36.889 19:20:26 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:36.889 19:20:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.147 MallocForNvmf1 00:04:37.147 19:20:26 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.147 19:20:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.405 [2024-07-15 19:20:27.139175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.405 19:20:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.405 19:20:27 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.664 19:20:27 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.664 19:20:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.921 19:20:27 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:37.921 19:20:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.180 19:20:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.180 19:20:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.439 [2024-07-15 19:20:28.219764] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.697 19:20:28 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:38.697 19:20:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.697 19:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.697 19:20:28 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:38.697 19:20:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.697 19:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.697 19:20:28 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:38.697 19:20:28 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.697 19:20:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.957 MallocBdevForConfigChangeCheck 00:04:38.957 19:20:28 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:38.957 19:20:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.957 19:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.957 19:20:28 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:38.957 19:20:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.215 INFO: shutting down applications... 00:04:39.215 19:20:28 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
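The target configuration assembled above reduces to a short RPC sequence; a condensed sketch using the same rpc.py calls recorded in this run ($rpc is only shorthand for readability; socket path, malloc sizes and NQN are as seen in the log):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc save_config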
00:04:39.215 19:20:28 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:39.215 19:20:28 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:39.215 19:20:28 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:39.215 19:20:28 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:39.473 Calling clear_iscsi_subsystem 00:04:39.473 Calling clear_nvmf_subsystem 00:04:39.473 Calling clear_nbd_subsystem 00:04:39.473 Calling clear_ublk_subsystem 00:04:39.473 Calling clear_vhost_blk_subsystem 00:04:39.473 Calling clear_vhost_scsi_subsystem 00:04:39.473 Calling clear_bdev_subsystem 00:04:39.473 19:20:29 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:39.473 19:20:29 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:39.473 19:20:29 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:39.474 19:20:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.474 19:20:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:39.474 19:20:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:40.041 19:20:29 json_config -- json_config/json_config.sh@345 -- # break 00:04:40.041 19:20:29 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:40.041 19:20:29 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:40.041 19:20:29 json_config -- json_config/common.sh@31 -- # local app=target 00:04:40.041 19:20:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:40.041 19:20:29 json_config -- json_config/common.sh@35 -- # [[ -n 61280 ]] 00:04:40.041 19:20:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61280 00:04:40.041 19:20:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:40.041 19:20:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.041 19:20:29 json_config -- json_config/common.sh@41 -- # kill -0 61280 00:04:40.041 19:20:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.363 SPDK target shutdown done 00:04:40.363 19:20:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.363 19:20:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.363 19:20:30 json_config -- json_config/common.sh@41 -- # kill -0 61280 00:04:40.363 19:20:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.363 19:20:30 json_config -- json_config/common.sh@43 -- # break 00:04:40.363 19:20:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.363 19:20:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.363 INFO: relaunching applications... 00:04:40.363 19:20:30 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
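The relaunch that follows does not re-issue those RPCs; it restarts the target with the saved JSON so the subsystems are reconstructed at startup. Roughly, with paths as used in this workspace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json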
00:04:40.363 19:20:30 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.621 19:20:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:40.621 19:20:30 json_config -- json_config/common.sh@10 -- # shift 00:04:40.621 19:20:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.621 19:20:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.621 19:20:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.621 19:20:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.621 19:20:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.621 19:20:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61555 00:04:40.621 Waiting for target to run... 00:04:40.621 19:20:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.621 19:20:30 json_config -- json_config/common.sh@25 -- # waitforlisten 61555 /var/tmp/spdk_tgt.sock 00:04:40.621 19:20:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.621 19:20:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 61555 ']' 00:04:40.621 19:20:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.621 19:20:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.621 19:20:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.621 19:20:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.621 19:20:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 [2024-07-15 19:20:30.238646] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:40.622 [2024-07-15 19:20:30.239194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:04:40.880 [2024-07-15 19:20:30.526538] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.880 [2024-07-15 19:20:30.582718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.138 [2024-07-15 19:20:30.896305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.138 [2024-07-15 19:20:30.928387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:41.397 19:20:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.656 00:04:41.656 19:20:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:41.656 19:20:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:41.656 19:20:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:41.656 19:20:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:41.656 INFO: Checking if target configuration is the same... 
00:04:41.656 19:20:31 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.656 19:20:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:41.656 19:20:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.656 + '[' 2 -ne 2 ']' 00:04:41.656 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:41.656 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:41.656 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:41.656 +++ basename /dev/fd/62 00:04:41.656 ++ mktemp /tmp/62.XXX 00:04:41.656 + tmp_file_1=/tmp/62.FPc 00:04:41.656 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.656 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:41.657 + tmp_file_2=/tmp/spdk_tgt_config.json.ZPp 00:04:41.657 + ret=0 00:04:41.657 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.916 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.916 + diff -u /tmp/62.FPc /tmp/spdk_tgt_config.json.ZPp 00:04:41.916 + echo 'INFO: JSON config files are the same' 00:04:41.916 INFO: JSON config files are the same 00:04:41.916 + rm /tmp/62.FPc /tmp/spdk_tgt_config.json.ZPp 00:04:41.916 + exit 0 00:04:41.916 19:20:31 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:41.916 INFO: changing configuration and checking if this can be detected... 00:04:41.916 19:20:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:41.916 19:20:31 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:41.916 19:20:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.482 19:20:31 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.482 19:20:31 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:42.482 19:20:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.482 + '[' 2 -ne 2 ']' 00:04:42.482 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:42.482 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:42.482 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:42.482 +++ basename /dev/fd/62 00:04:42.483 ++ mktemp /tmp/62.XXX 00:04:42.483 + tmp_file_1=/tmp/62.Gjp 00:04:42.483 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.483 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.483 + tmp_file_2=/tmp/spdk_tgt_config.json.fGS 00:04:42.483 + ret=0 00:04:42.483 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:42.740 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:42.740 + diff -u /tmp/62.Gjp /tmp/spdk_tgt_config.json.fGS 00:04:42.740 + ret=1 00:04:42.740 + echo '=== Start of file: /tmp/62.Gjp ===' 00:04:42.740 + cat /tmp/62.Gjp 00:04:42.740 + echo '=== End of file: /tmp/62.Gjp ===' 00:04:42.740 + echo '' 00:04:42.740 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fGS ===' 00:04:42.740 + cat /tmp/spdk_tgt_config.json.fGS 00:04:42.740 + echo '=== End of file: /tmp/spdk_tgt_config.json.fGS ===' 00:04:42.740 + echo '' 00:04:42.740 + rm /tmp/62.Gjp /tmp/spdk_tgt_config.json.fGS 00:04:42.740 + exit 1 00:04:42.740 INFO: configuration change detected. 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 61555 ]] 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.740 19:20:32 json_config -- json_config/json_config.sh@323 -- # killprocess 61555 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@948 -- # '[' -z 61555 ']' 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@952 -- # kill -0 61555 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@953 -- # uname 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61555 00:04:42.740 
killing process with pid 61555 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61555' 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@967 -- # kill 61555 00:04:42.740 19:20:32 json_config -- common/autotest_common.sh@972 -- # wait 61555 00:04:42.998 19:20:32 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.998 19:20:32 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:42.998 19:20:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.998 19:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.998 INFO: Success 00:04:42.998 19:20:32 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:42.998 19:20:32 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:42.998 ************************************ 00:04:42.998 END TEST json_config 00:04:42.998 ************************************ 00:04:42.998 00:04:42.998 real 0m8.445s 00:04:42.998 user 0m12.401s 00:04:42.998 sys 0m1.547s 00:04:42.998 19:20:32 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.998 19:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.998 19:20:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.998 19:20:32 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:42.998 19:20:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.998 19:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.998 19:20:32 -- common/autotest_common.sh@10 -- # set +x 00:04:43.257 ************************************ 00:04:43.257 START TEST json_config_extra_key 00:04:43.257 ************************************ 00:04:43.257 19:20:32 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.257 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.257 19:20:32 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.257 19:20:32 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.257 19:20:32 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.258 19:20:32 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.258 19:20:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.258 19:20:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.258 19:20:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.258 19:20:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:43.258 19:20:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.258 19:20:32 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.258 19:20:32 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:43.258 INFO: launching applications... 00:04:43.258 Waiting for target to run... 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:43.258 19:20:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61725 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
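Editor's note on the bookkeeping visible in the trace above: json_config/common.sh tracks every app it manages in parallel associative arrays keyed by app name (only 'target' is used in this test). A minimal bash sketch of that pattern, with the same values as in the trace but paths shortened to be relative to an SPDK checkout:

  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='test/json_config/extra_key.json')
  # Subsequent helpers index all four arrays by the same key:
  app=target
  echo "launching ${app}: params '${app_params[$app]}' socket '${app_socket[$app]}' config '${configs_path[$app]}'"
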
00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61725 /var/tmp/spdk_tgt.sock 00:04:43.258 19:20:32 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61725 ']' 00:04:43.258 19:20:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.258 19:20:32 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.258 19:20:32 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.258 19:20:32 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.258 19:20:32 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.258 19:20:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.258 [2024-07-15 19:20:32.962528] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:43.258 [2024-07-15 19:20:32.962900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61725 ] 00:04:43.517 [2024-07-15 19:20:33.270058] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.774 [2024-07-15 19:20:33.326339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.339 19:20:33 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.339 19:20:33 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:44.339 00:04:44.339 19:20:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:44.339 INFO: shutting down applications... 
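The launch just traced follows the usual SPDK test pattern: start spdk_tgt in the background with a JSON config and poll its RPC socket until it answers. A simplified sketch of that pattern (paths relative to an SPDK checkout; the poll loop is a stand-in for the waitforlisten helper from autotest_common.sh, not the helper itself):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!
  # rpc_get_methods fails until the target is listening on the socket.
  for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done
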
00:04:44.339 19:20:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61725 ]] 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61725 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61725 00:04:44.339 19:20:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61725 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:44.905 SPDK target shutdown done 00:04:44.905 Success 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.905 19:20:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:44.905 19:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:44.905 00:04:44.905 real 0m1.682s 00:04:44.905 user 0m1.582s 00:04:44.905 sys 0m0.326s 00:04:44.905 ************************************ 00:04:44.905 END TEST json_config_extra_key 00:04:44.905 ************************************ 00:04:44.905 19:20:34 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.905 19:20:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.905 19:20:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.905 19:20:34 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.905 19:20:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.905 19:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.905 19:20:34 -- common/autotest_common.sh@10 -- # set +x 00:04:44.905 ************************************ 00:04:44.905 START TEST alias_rpc 00:04:44.905 ************************************ 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.905 * Looking for test storage... 00:04:44.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:44.905 19:20:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:44.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
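The shutdown traced above is the generic pattern from json_config/common.sh: send SIGINT, then poll with kill -0 for up to 30 half-second intervals before declaring the target down. Condensed into a sketch, assuming $pid holds the target's PID:

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
    # kill -0 only tests that the process still exists
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'
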
00:04:44.905 19:20:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61807 00:04:44.905 19:20:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61807 00:04:44.905 19:20:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61807 ']' 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.905 19:20:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.905 [2024-07-15 19:20:34.693568] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:44.905 [2024-07-15 19:20:34.693891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61807 ] 00:04:45.163 [2024-07-15 19:20:34.831661] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.163 [2024-07-15 19:20:34.903114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.099 19:20:35 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.099 19:20:35 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.099 19:20:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:46.357 19:20:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61807 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61807 ']' 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61807 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61807 00:04:46.357 killing process with pid 61807 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61807' 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@967 -- # kill 61807 00:04:46.357 19:20:36 alias_rpc -- common/autotest_common.sh@972 -- # wait 61807 00:04:46.617 ************************************ 00:04:46.617 END TEST alias_rpc 00:04:46.617 ************************************ 00:04:46.617 00:04:46.617 real 0m1.774s 00:04:46.617 user 0m2.185s 00:04:46.617 sys 0m0.352s 00:04:46.617 19:20:36 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.617 19:20:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.617 19:20:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.617 19:20:36 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:04:46.617 19:20:36 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.617 19:20:36 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.617 19:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.617 19:20:36 -- common/autotest_common.sh@10 -- # set +x 00:04:46.617 ************************************ 00:04:46.617 START TEST dpdk_mem_utility 00:04:46.617 ************************************ 00:04:46.617 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.878 * Looking for test storage... 00:04:46.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:46.878 19:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.878 19:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61894 00:04:46.878 19:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61894 00:04:46.878 19:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.878 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61894 ']' 00:04:46.878 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.878 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.878 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.878 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.878 19:20:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.878 [2024-07-15 19:20:36.522088] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
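For context on the memory dump that follows: the dpdk_mem_utility test asks the running target to write its DPDK memory statistics to a file over RPC, then post-processes that file with scripts/dpdk_mem_info.py. The same two steps against an already-running target (default RPC socket assumed) look roughly like:

  # 1. Dump DPDK memory stats; the RPC reports the output file (/tmp/spdk_mem_dump.txt).
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # 2. Summarize heaps, mempools and memzones; -m 0 adds the per-element map for heap 0.
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0
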
00:04:46.878 [2024-07-15 19:20:36.522438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61894 ] 00:04:46.878 [2024-07-15 19:20:36.657188] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.136 [2024-07-15 19:20:36.739428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.704 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.704 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:47.704 19:20:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:47.704 19:20:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:47.704 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.704 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.704 { 00:04:47.704 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.704 } 00:04:47.704 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.704 19:20:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:47.963 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:47.963 1 heaps totaling size 814.000000 MiB 00:04:47.963 size: 814.000000 MiB heap id: 0 00:04:47.963 end heaps---------- 00:04:47.963 8 mempools totaling size 598.116089 MiB 00:04:47.963 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.963 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.963 size: 84.521057 MiB name: bdev_io_61894 00:04:47.963 size: 51.011292 MiB name: evtpool_61894 00:04:47.963 size: 50.003479 MiB name: msgpool_61894 00:04:47.963 size: 21.763794 MiB name: PDU_Pool 00:04:47.963 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.963 size: 0.026123 MiB name: Session_Pool 00:04:47.963 end mempools------- 00:04:47.963 6 memzones totaling size 4.142822 MiB 00:04:47.963 size: 1.000366 MiB name: RG_ring_0_61894 00:04:47.963 size: 1.000366 MiB name: RG_ring_1_61894 00:04:47.963 size: 1.000366 MiB name: RG_ring_4_61894 00:04:47.963 size: 1.000366 MiB name: RG_ring_5_61894 00:04:47.963 size: 0.125366 MiB name: RG_ring_2_61894 00:04:47.963 size: 0.015991 MiB name: RG_ring_3_61894 00:04:47.963 end memzones------- 00:04:47.963 19:20:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.963 heap id: 0 total size: 814.000000 MiB number of busy elements: 215 number of free elements: 15 00:04:47.963 list of free elements. 
size: 12.487488 MiB 00:04:47.963 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:47.963 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:47.963 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:47.963 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:47.963 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:47.963 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:47.963 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:47.963 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:47.963 element at address: 0x200000200000 with size: 0.837036 MiB 00:04:47.963 element at address: 0x20001aa00000 with size: 0.572998 MiB 00:04:47.963 element at address: 0x20000b200000 with size: 0.489807 MiB 00:04:47.963 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:47.963 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:47.963 element at address: 0x200027e00000 with size: 0.399048 MiB 00:04:47.963 element at address: 0x200003a00000 with size: 0.350769 MiB 00:04:47.963 list of standard malloc elements. size: 199.249939 MiB 00:04:47.963 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:47.964 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:47.964 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:47.964 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:47.964 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:47.964 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:47.964 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:47.964 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:47.964 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:47.964 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:04:47.964 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:47.964 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa946c0 
with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:47.964 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e66280 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e66340 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6cf40 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:47.964 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e580 with size: 0.000183 MiB 
00:04:47.965 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:47.965 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:47.965 list of memzone associated elements. 
size: 602.262573 MiB 00:04:47.965 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:47.965 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.965 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:47.965 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.965 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:47.965 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61894_0 00:04:47.965 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:47.965 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61894_0 00:04:47.965 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:47.965 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61894_0 00:04:47.965 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:47.965 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.965 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:47.965 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.965 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:47.965 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61894 00:04:47.965 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:47.965 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61894 00:04:47.965 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:47.965 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61894 00:04:47.965 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:47.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.965 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:47.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.965 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:47.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.965 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:47.965 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.965 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:47.965 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61894 00:04:47.965 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:47.965 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61894 00:04:47.965 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:47.965 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61894 00:04:47.965 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:47.965 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61894 00:04:47.965 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:47.965 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61894 00:04:47.965 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:47.965 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.965 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:47.965 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.965 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:47.965 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.965 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:47.965 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61894 00:04:47.965 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:47.965 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.965 element at address: 0x200027e66400 with size: 0.023743 MiB 00:04:47.965 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.965 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:47.965 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61894 00:04:47.965 element at address: 0x200027e6c540 with size: 0.002441 MiB 00:04:47.965 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.965 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:47.965 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61894 00:04:47.965 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:47.965 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61894 00:04:47.965 element at address: 0x200027e6d000 with size: 0.000305 MiB 00:04:47.965 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.965 19:20:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.965 19:20:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61894 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61894 ']' 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61894 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61894 00:04:47.965 killing process with pid 61894 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61894' 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61894 00:04:47.965 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61894 00:04:48.224 ************************************ 00:04:48.224 END TEST dpdk_mem_utility 00:04:48.224 ************************************ 00:04:48.224 00:04:48.224 real 0m1.565s 00:04:48.224 user 0m1.834s 00:04:48.224 sys 0m0.315s 00:04:48.224 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.224 19:20:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.224 19:20:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.224 19:20:37 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.224 19:20:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.224 19:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.224 19:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:48.224 ************************************ 00:04:48.224 START TEST event 00:04:48.224 ************************************ 00:04:48.224 19:20:37 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.482 * Looking for test storage... 
00:04:48.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:48.482 19:20:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:48.482 19:20:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.482 19:20:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.482 19:20:38 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:48.482 19:20:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.482 19:20:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.482 ************************************ 00:04:48.482 START TEST event_perf 00:04:48.482 ************************************ 00:04:48.482 19:20:38 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.482 Running I/O for 1 seconds...[2024-07-15 19:20:38.097971] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:04:48.482 [2024-07-15 19:20:38.098059] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61983 ] 00:04:48.482 [2024-07-15 19:20:38.236130] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.740 [2024-07-15 19:20:38.308526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.740 [2024-07-15 19:20:38.308606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.740 [2024-07-15 19:20:38.309323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.740 [2024-07-15 19:20:38.309377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.676 Running I/O for 1 seconds... 00:04:49.676 lcore 0: 183266 00:04:49.676 lcore 1: 183266 00:04:49.676 lcore 2: 183268 00:04:49.676 lcore 3: 183264 00:04:49.676 done. 00:04:49.676 00:04:49.676 real 0m1.309s 00:04:49.676 user 0m4.131s 00:04:49.676 sys 0m0.050s 00:04:49.676 19:20:39 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.676 19:20:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.676 ************************************ 00:04:49.676 END TEST event_perf 00:04:49.676 ************************************ 00:04:49.676 19:20:39 event -- common/autotest_common.sh@1142 -- # return 0 00:04:49.676 19:20:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.676 19:20:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:49.676 19:20:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.676 19:20:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.676 ************************************ 00:04:49.676 START TEST event_reactor 00:04:49.676 ************************************ 00:04:49.676 19:20:39 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.676 [2024-07-15 19:20:39.466959] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
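Stepping back to the event_perf run above: it schedules events across every core in the mask for the requested time and prints a per-lcore count (about 183k events per core in this 1-second run). A quick way to reproduce and total those counts, assuming the binary was built in place and the output format matches the trace (the awk one-liner is not part of the test):

  ./test/event/event_perf/event_perf -m 0xF -t 1 \
      | awk '/^lcore/ {sum += $3} END {print "total events:", sum}'
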
00:04:49.676 [2024-07-15 19:20:39.467613] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62022 ] 00:04:49.935 [2024-07-15 19:20:39.600215] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.935 [2024-07-15 19:20:39.659135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.310 test_start 00:04:51.310 oneshot 00:04:51.310 tick 100 00:04:51.310 tick 100 00:04:51.310 tick 250 00:04:51.310 tick 100 00:04:51.310 tick 100 00:04:51.310 tick 250 00:04:51.310 tick 100 00:04:51.310 tick 500 00:04:51.310 tick 100 00:04:51.310 tick 100 00:04:51.310 tick 250 00:04:51.310 tick 100 00:04:51.310 tick 100 00:04:51.310 test_end 00:04:51.310 00:04:51.310 real 0m1.279s 00:04:51.310 user 0m1.137s 00:04:51.310 sys 0m0.036s 00:04:51.310 19:20:40 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.310 19:20:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:51.310 ************************************ 00:04:51.310 END TEST event_reactor 00:04:51.310 ************************************ 00:04:51.310 19:20:40 event -- common/autotest_common.sh@1142 -- # return 0 00:04:51.310 19:20:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.310 19:20:40 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:51.310 19:20:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.310 19:20:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.310 ************************************ 00:04:51.310 START TEST event_reactor_perf 00:04:51.310 ************************************ 00:04:51.310 19:20:40 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.310 [2024-07-15 19:20:40.797118] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:04:51.310 [2024-07-15 19:20:40.797211] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62057 ] 00:04:51.310 [2024-07-15 19:20:40.935880] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.310 [2024-07-15 19:20:40.996836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.682 test_start 00:04:52.682 test_end 00:04:52.682 Performance: 354110 events per second 00:04:52.682 00:04:52.682 real 0m1.296s 00:04:52.682 user 0m1.150s 00:04:52.682 sys 0m0.039s 00:04:52.682 19:20:42 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.682 ************************************ 00:04:52.682 END TEST event_reactor_perf 00:04:52.682 ************************************ 00:04:52.682 19:20:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.682 19:20:42 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.682 19:20:42 event -- event/event.sh@49 -- # uname -s 00:04:52.682 19:20:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.682 19:20:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.682 19:20:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.682 19:20:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.682 19:20:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.682 ************************************ 00:04:52.682 START TEST event_scheduler 00:04:52.682 ************************************ 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.682 * Looking for test storage... 00:04:52.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.682 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.682 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62113 00:04:52.682 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.682 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.682 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62113 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62113 ']' 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.682 19:20:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.682 [2024-07-15 19:20:42.287532] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:04:52.682 [2024-07-15 19:20:42.287995] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:04:52.682 [2024-07-15 19:20:42.436075] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.941 [2024-07-15 19:20:42.501918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.941 [2024-07-15 19:20:42.502067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.941 [2024-07-15 19:20:42.502819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.941 [2024-07-15 19:20:42.502890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:52.941 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.941 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:52.941 POWER: Cannot set governor of lcore 0 to userspace 00:04:52.941 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:52.941 POWER: Cannot set governor of lcore 0 to performance 00:04:52.941 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:52.941 POWER: Cannot set governor of lcore 0 to userspace 00:04:52.941 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:52.941 POWER: Cannot set governor of lcore 0 to userspace 00:04:52.941 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:52.941 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:52.941 POWER: Unable to set Power Management Environment for lcore 0 00:04:52.941 [2024-07-15 19:20:42.544222] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:52.941 [2024-07-15 19:20:42.544238] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:52.941 [2024-07-15 19:20:42.544246] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:52.941 [2024-07-15 19:20:42.544258] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:52.941 [2024-07-15 19:20:42.544266] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:52.941 [2024-07-15 19:20:42.544273] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.941 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.941 [2024-07-15 19:20:42.606261] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
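The sequence above is the standard way to pick a scheduler: start the app with --wait-for-rpc, choose the scheduler over RPC while initialization is paused, then resume it (the POWER/governor warnings simply mean the dpdk governor cannot drive CPU frequency scaling inside this VM, so the dynamic scheduler runs without it). A condensed sketch, using spdk_tgt and the default RPC socket purely for illustration:

  ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &
  # Wait for the RPC server, then set the scheduler before subsystem init.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py framework_get_scheduler   # confirm which scheduler is active
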
00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.941 19:20:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.941 19:20:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.941 ************************************ 00:04:52.941 START TEST scheduler_create_thread 00:04:52.941 ************************************ 00:04:52.941 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:52.941 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:52.941 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.941 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.941 2 00:04:52.941 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 3 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 4 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 5 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 6 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 7 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 8 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 9 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 10 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.942 19:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.877 19:20:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.877 19:20:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:53.877 19:20:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.877 19:20:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.252 19:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.252 19:20:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.252 19:20:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.252 19:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.252 19:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.628 19:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.628 00:04:56.628 real 0m3.375s 00:04:56.628 user 0m0.022s 00:04:56.628 sys 0m0.003s 00:04:56.628 19:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.628 ************************************ 00:04:56.628 END TEST scheduler_create_thread 00:04:56.628 ************************************ 00:04:56.628 19:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:56.628 19:20:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.628 19:20:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62113 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62113 ']' 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62113 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62113 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:56.628 killing process with pid 62113 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62113' 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62113 00:04:56.628 19:20:46 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62113 00:04:56.628 [2024-07-15 19:20:46.374876] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
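The scheduler_create_thread block above drives SPDK's scheduler test plugin entirely over RPC. A minimal standalone sketch of those calls, assuming scripts/rpc.py is pointed at the test app's default RPC socket and can find the test's scheduler_plugin on PYTHONPATH (the test itself wraps all of this in its rpc_cmd helper):

    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler            # assumption: where the plugin lives
    rpc="./scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100     # pinned, fully busy thread
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0         # pinned, idle thread
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)   # unpinned; RPC returns the thread id
    $rpc scheduler_thread_set_active "$thread_id" 50                # raise its active load to 50%
    del_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$del_id"                          # exercise thread deletion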
00:04:56.885 00:04:56.885 real 0m4.460s 00:04:56.885 user 0m7.673s 00:04:56.885 sys 0m0.272s 00:04:56.885 19:20:46 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.885 19:20:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.885 ************************************ 00:04:56.885 END TEST event_scheduler 00:04:56.885 ************************************ 00:04:56.885 19:20:46 event -- common/autotest_common.sh@1142 -- # return 0 00:04:56.885 19:20:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:56.885 19:20:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:56.885 19:20:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.885 19:20:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.885 19:20:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.885 ************************************ 00:04:56.885 START TEST app_repeat 00:04:56.885 ************************************ 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62223 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.885 Process app_repeat pid: 62223 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62223' 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.885 spdk_app_start Round 0 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:56.885 19:20:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62223 ']' 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.885 19:20:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.885 [2024-07-15 19:20:46.669903] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:04:56.886 [2024-07-15 19:20:46.670003] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62223 ] 00:04:57.143 [2024-07-15 19:20:46.808104] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.143 [2024-07-15 19:20:46.879451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.143 [2024-07-15 19:20:46.879468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.116 19:20:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.116 19:20:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.116 19:20:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.116 Malloc0 00:04:58.116 19:20:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.373 Malloc1 00:04:58.630 19:20:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.630 19:20:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.888 /dev/nbd0 00:04:58.888 19:20:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.888 19:20:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:58.888 19:20:48 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.888 1+0 records in 00:04:58.888 1+0 records out 00:04:58.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314102 s, 13.0 MB/s 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:58.888 19:20:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:58.888 19:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.888 19:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.888 19:20:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.146 /dev/nbd1 00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.146 1+0 records in 00:04:59.146 1+0 records out 00:04:59.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325572 s, 12.6 MB/s 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.146 19:20:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
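Round 0 above first creates two malloc bdevs and exposes them as NBD block devices before any data verification runs. A sketch of that setup, with the RPC socket and bdev parameters taken from the log; the scratch file path is an assumption (the test uses its own file under test/event/), and waitfornbd is reduced to a single check:

    rpc=./scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096         # 64 MB bdev with 4096-byte blocks -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096         # second bdev -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0   # export each bdev as an NBD device
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    for nbd in nbd0 nbd1; do
        grep -q -w $nbd /proc/partitions                              # device node is visible
        dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # and readable with direct I/O
    done
    rm -f /tmp/nbdtest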
00:04:59.146 19:20:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.403 { 00:04:59.403 "bdev_name": "Malloc0", 00:04:59.403 "nbd_device": "/dev/nbd0" 00:04:59.403 }, 00:04:59.403 { 00:04:59.403 "bdev_name": "Malloc1", 00:04:59.403 "nbd_device": "/dev/nbd1" 00:04:59.403 } 00:04:59.403 ]' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.403 { 00:04:59.403 "bdev_name": "Malloc0", 00:04:59.403 "nbd_device": "/dev/nbd0" 00:04:59.403 }, 00:04:59.403 { 00:04:59.403 "bdev_name": "Malloc1", 00:04:59.403 "nbd_device": "/dev/nbd1" 00:04:59.403 } 00:04:59.403 ]' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.403 /dev/nbd1' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.403 /dev/nbd1' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.403 256+0 records in 00:04:59.403 256+0 records out 00:04:59.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00945433 s, 111 MB/s 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.403 256+0 records in 00:04:59.403 256+0 records out 00:04:59.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310669 s, 33.8 MB/s 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.403 19:20:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.660 256+0 records in 00:04:59.660 256+0 records out 00:04:59.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029266 s, 35.8 MB/s 00:04:59.660 19:20:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.660 19:20:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.660 19:20:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.661 19:20:49 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.661 19:20:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.919 19:20:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.177 19:20:49 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.177 19:20:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.436 19:20:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.436 19:20:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.694 19:20:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.952 [2024-07-15 19:20:50.563488] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.953 [2024-07-15 19:20:50.621379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.953 [2024-07-15 19:20:50.621383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.953 [2024-07-15 19:20:50.651097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.953 [2024-07-15 19:20:50.651151] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.263 19:20:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.263 spdk_app_start Round 1 00:05:04.263 19:20:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.263 19:20:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62223 ']' 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
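Each round then pushes 1 MiB of random data through both NBD devices and compares it back before tearing them down. That write/verify/cleanup pattern, lifted almost verbatim from the dd and cmp calls in the log (file name shortened here):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct # write it through each NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $nbd                            # read back and compare byte-for-byte
    done
    rm nbdrandtest
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0   # detach both devices
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1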
00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.263 19:20:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.263 19:20:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.263 Malloc0 00:05:04.263 19:20:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.520 Malloc1 00:05:04.520 19:20:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.520 19:20:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.778 /dev/nbd0 00:05:04.778 19:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.778 19:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.778 1+0 records in 00:05:04.778 1+0 records out 
00:05:04.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253591 s, 16.2 MB/s 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:04.778 19:20:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:04.778 19:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.778 19:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.778 19:20:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.036 /dev/nbd1 00:05:05.036 19:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.293 19:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.293 1+0 records in 00:05:05.293 1+0 records out 00:05:05.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319098 s, 12.8 MB/s 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.293 19:20:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.293 19:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.293 19:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.293 19:20:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.293 19:20:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.293 19:20:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.551 { 00:05:05.551 "bdev_name": "Malloc0", 00:05:05.551 "nbd_device": "/dev/nbd0" 00:05:05.551 }, 00:05:05.551 { 00:05:05.551 "bdev_name": "Malloc1", 00:05:05.551 "nbd_device": "/dev/nbd1" 00:05:05.551 } 
00:05:05.551 ]' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.551 { 00:05:05.551 "bdev_name": "Malloc0", 00:05:05.551 "nbd_device": "/dev/nbd0" 00:05:05.551 }, 00:05:05.551 { 00:05:05.551 "bdev_name": "Malloc1", 00:05:05.551 "nbd_device": "/dev/nbd1" 00:05:05.551 } 00:05:05.551 ]' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.551 /dev/nbd1' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.551 /dev/nbd1' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.551 256+0 records in 00:05:05.551 256+0 records out 00:05:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653814 s, 160 MB/s 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.551 256+0 records in 00:05:05.551 256+0 records out 00:05:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256106 s, 40.9 MB/s 00:05:05.551 19:20:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.552 256+0 records in 00:05:05.552 256+0 records out 00:05:05.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279817 s, 37.5 MB/s 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.552 19:20:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.552 19:20:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.809 19:20:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.810 19:20:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.810 19:20:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.067 19:20:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.325 19:20:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.583 19:20:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.583 19:20:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.842 19:20:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.842 [2024-07-15 19:20:56.596895] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.099 [2024-07-15 19:20:56.655658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.099 [2024-07-15 19:20:56.655673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.099 [2024-07-15 19:20:56.688297] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.099 [2024-07-15 19:20:56.688348] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.380 19:20:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.380 19:20:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:10.380 spdk_app_start Round 2 00:05:10.380 19:20:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62223 ']' 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
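The nbd_get_count checks that bracket each round simply parse the RPC's JSON listing. A sketch of that check using the same jq filter shown in the log; the trailing || true mirrors how the test tolerates an empty listing:

    sock=/var/tmp/spdk-nbd.sock
    disks_json=$(./scripts/rpc.py -s $sock nbd_get_disks)        # JSON array of {bdev_name, nbd_device}
    echo "$disks_json" | jq -r '.[] | .nbd_device'               # /dev/nbd0 and /dev/nbd1 while attached
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "$count"                                                # 2 before teardown, 0 after nbd_stop_disk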
00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.380 19:20:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:10.380 19:20:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.380 Malloc0 00:05:10.380 19:21:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.642 Malloc1 00:05:10.642 19:21:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.642 19:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.901 /dev/nbd0 00:05:10.901 19:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.901 19:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.901 1+0 records in 00:05:10.901 1+0 records out 
00:05:10.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473269 s, 8.7 MB/s 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.901 19:21:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:10.901 19:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.901 19:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.901 19:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.160 /dev/nbd1 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.160 1+0 records in 00:05:11.160 1+0 records out 00:05:11.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031732 s, 12.9 MB/s 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.160 19:21:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.160 19:21:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.419 { 00:05:11.419 "bdev_name": "Malloc0", 00:05:11.419 "nbd_device": "/dev/nbd0" 00:05:11.419 }, 00:05:11.419 { 00:05:11.419 "bdev_name": "Malloc1", 00:05:11.419 "nbd_device": "/dev/nbd1" 00:05:11.419 } 
00:05:11.419 ]' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.419 { 00:05:11.419 "bdev_name": "Malloc0", 00:05:11.419 "nbd_device": "/dev/nbd0" 00:05:11.419 }, 00:05:11.419 { 00:05:11.419 "bdev_name": "Malloc1", 00:05:11.419 "nbd_device": "/dev/nbd1" 00:05:11.419 } 00:05:11.419 ]' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.419 /dev/nbd1' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.419 /dev/nbd1' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.419 256+0 records in 00:05:11.419 256+0 records out 00:05:11.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00742733 s, 141 MB/s 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.419 19:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.678 256+0 records in 00:05:11.678 256+0 records out 00:05:11.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259542 s, 40.4 MB/s 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.678 256+0 records in 00:05:11.678 256+0 records out 00:05:11.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277902 s, 37.7 MB/s 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.678 19:21:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.678 19:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.936 19:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.196 19:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.454 19:21:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.454 19:21:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.713 19:21:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.971 [2024-07-15 19:21:02.627344] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.971 [2024-07-15 19:21:02.687716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.971 [2024-07-15 19:21:02.687724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.971 [2024-07-15 19:21:02.719557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.971 [2024-07-15 19:21:02.719614] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.279 19:21:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62223 ']' 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
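Zooming out, the rounds above come from a single driver loop: app_repeat is started once with -t 4, and each round ends with spdk_kill_instance SIGTERM, after which the app re-initializes for the next iteration. A rough sketch of that control flow, where waitforlisten and killprocess are the test suite's own helpers (their bodies are not reproduced here):

    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock     # app iteration is up and serving RPC
        # ... create malloc bdevs, attach/verify/detach NBD devices (see the sketches above) ...
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end this iteration
        sleep 3
    done
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock         # final iteration (Round 3)
    killprocess $repeat_pid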
00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.279 19:21:05 event.app_repeat -- event/event.sh@39 -- # killprocess 62223 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62223 ']' 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62223 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62223 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.279 killing process with pid 62223 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62223' 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62223 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62223 00:05:16.279 spdk_app_start is called in Round 0. 00:05:16.279 Shutdown signal received, stop current app iteration 00:05:16.279 Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 reinitialization... 00:05:16.279 spdk_app_start is called in Round 1. 00:05:16.279 Shutdown signal received, stop current app iteration 00:05:16.279 Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 reinitialization... 00:05:16.279 spdk_app_start is called in Round 2. 00:05:16.279 Shutdown signal received, stop current app iteration 00:05:16.279 Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 reinitialization... 00:05:16.279 spdk_app_start is called in Round 3. 
00:05:16.279 Shutdown signal received, stop current app iteration 00:05:16.279 19:21:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.279 19:21:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.279 00:05:16.279 real 0m19.295s 00:05:16.279 user 0m43.904s 00:05:16.279 sys 0m2.795s 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.279 19:21:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.279 ************************************ 00:05:16.279 END TEST app_repeat 00:05:16.279 ************************************ 00:05:16.279 19:21:05 event -- common/autotest_common.sh@1142 -- # return 0 00:05:16.279 19:21:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.279 19:21:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.279 19:21:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.279 19:21:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.279 19:21:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.279 ************************************ 00:05:16.279 START TEST cpu_locks 00:05:16.279 ************************************ 00:05:16.279 19:21:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.279 * Looking for test storage... 00:05:16.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.279 19:21:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.279 19:21:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.279 19:21:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.279 19:21:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.279 19:21:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.279 19:21:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.279 19:21:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 ************************************ 00:05:16.538 START TEST default_locks 00:05:16.539 ************************************ 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62854 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62854 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62854 ']' 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
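Every cpu_locks sub-test repeats the launch pattern shown above: start spdk_tgt pinned to a core mask, remember its pid, and block until the JSON-RPC socket answers. A simplified stand-in for the waitforlisten helper, assuming rpc_get_methods as the readiness probe (the real helper lives in autotest_common.sh):

    # Launch the target on core 0 only and wait for its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # rpc.py exits non-zero until the socket is up and answering.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( i < max_retries ))   # fail the test if the app never came up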
00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.539 19:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.539 [2024-07-15 19:21:06.151757] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:16.539 [2024-07-15 19:21:06.151847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:05:16.539 [2024-07-15 19:21:06.287731] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.797 [2024-07-15 19:21:06.358405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.363 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.363 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:17.363 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62854 00:05:17.363 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62854 00:05:17.363 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62854 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62854 ']' 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62854 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62854 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62854' 00:05:17.928 killing process with pid 62854 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62854 00:05:17.928 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62854 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62854 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62854 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:18.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
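Two helpers do most of the asserting in these traces: locks_exist, which checks that the pid holds an spdk_cpu_lock file, and killprocess, which only signals processes that look like SPDK reactors. A condensed sketch of both, matching the commands above (the real killprocess also escalates through sudo when the process is sudo-wrapped):

    locks_exist() {
        # The target takes a lock on /var/tmp/spdk_cpu_lock_* for each claimed core;
        # lslocks -p lists the locks held by that pid.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                         # process must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }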
00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62854 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62854 ']' 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 ERROR: process (pid: 62854) is no longer running 00:05:18.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62854) - No such process 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.186 00:05:18.186 real 0m1.768s 00:05:18.186 user 0m2.007s 00:05:18.186 sys 0m0.481s 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.186 19:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 ************************************ 00:05:18.186 END TEST default_locks 00:05:18.186 ************************************ 00:05:18.186 19:21:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.186 19:21:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.186 19:21:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.186 19:21:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.186 19:21:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 ************************************ 00:05:18.186 START TEST default_locks_via_rpc 00:05:18.186 ************************************ 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62918 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62918 00:05:18.186 19:21:07 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62918 ']' 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.186 19:21:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.186 [2024-07-15 19:21:07.980308] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:18.186 [2024-07-15 19:21:07.980447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62918 ] 00:05:18.444 [2024-07-15 19:21:08.120603] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.444 [2024-07-15 19:21:08.209535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62918 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62918 00:05:19.376 19:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.941 19:21:09 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62918 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62918 ']' 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62918 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62918 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.941 killing process with pid 62918 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62918' 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62918 00:05:19.941 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62918 00:05:20.198 00:05:20.198 real 0m1.889s 00:05:20.198 user 0m2.229s 00:05:20.198 sys 0m0.515s 00:05:20.198 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.198 19:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.198 ************************************ 00:05:20.198 END TEST default_locks_via_rpc 00:05:20.198 ************************************ 00:05:20.198 19:21:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.198 19:21:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.198 19:21:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.198 19:21:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.199 19:21:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.199 ************************************ 00:05:20.199 START TEST non_locking_app_on_locked_coremask 00:05:20.199 ************************************ 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62976 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62976 /var/tmp/spdk.sock 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62976 ']' 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
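default_locks_via_rpc toggles the same per-core lock at runtime: framework_disable_cpumask_locks releases the lock files and framework_enable_cpumask_locks re-claims them, exactly as the rpc_cmd calls in the trace show. Invoked directly, the round trip looks roughly like this (spdk_tgt_pid is the target started above, 62918 in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    "$rpc" -s "$sock" framework_disable_cpumask_locks     # per-core lock files are released
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))                           # nothing left behind
    "$rpc" -s "$sock" framework_enable_cpumask_locks       # locks re-acquired
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # now expected to succeed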
00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.199 19:21:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.199 [2024-07-15 19:21:09.894465] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:20.199 [2024-07-15 19:21:09.894541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62976 ] 00:05:20.457 [2024-07-15 19:21:10.030442] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.457 [2024-07-15 19:21:10.101252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62996 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62996 /var/tmp/spdk2.sock 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62996 ']' 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.716 19:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.716 [2024-07-15 19:21:10.344209] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:20.716 [2024-07-15 19:21:10.344304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62996 ] 00:05:20.716 [2024-07-15 19:21:10.490990] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
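non_locking_app_on_locked_coremask pairs a locking first target with a second one on the same mask that opts out via --disable-cpumask-locks and its own RPC socket, which is why the second launch above comes up cleanly. In outline (waitforlisten steps omitted):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                  # claims the core 0 lock
    spdk_tgt_pid=$!
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no claim attempted
    spdk_tgt_pid2=$!
    # Only the first pid holds the spdk_cpu_lock file:
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock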
00:05:20.716 [2024-07-15 19:21:10.491054] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.976 [2024-07-15 19:21:10.608240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.910 19:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.910 19:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.910 19:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62976 00:05:21.910 19:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62976 00:05:21.910 19:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62976 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62976 ']' 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62976 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62976 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.477 killing process with pid 62976 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62976' 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62976 00:05:22.477 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62976 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62996 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62996 ']' 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62996 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62996 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.046 killing process with pid 62996 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62996' 00:05:23.046 19:21:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62996 00:05:23.046 19:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62996 00:05:23.304 00:05:23.304 real 0m3.219s 00:05:23.304 user 0m3.776s 00:05:23.304 sys 0m0.888s 00:05:23.304 19:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.304 ************************************ 00:05:23.304 END TEST non_locking_app_on_locked_coremask 00:05:23.304 ************************************ 00:05:23.304 19:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.304 19:21:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:23.304 19:21:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.304 19:21:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.304 19:21:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.304 19:21:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.582 ************************************ 00:05:23.582 START TEST locking_app_on_unlocked_coremask 00:05:23.582 ************************************ 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63075 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63075 /var/tmp/spdk.sock 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63075 ']' 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.582 19:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.582 [2024-07-15 19:21:13.175023] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:23.582 [2024-07-15 19:21:13.175879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63075 ] 00:05:23.582 [2024-07-15 19:21:13.312446] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.582 [2024-07-15 19:21:13.312507] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.847 [2024-07-15 19:21:13.381652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63103 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63103 /var/tmp/spdk2.sock 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63103 ']' 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.414 19:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.672 [2024-07-15 19:21:14.282186] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
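locking_app_on_unlocked_coremask is the mirror case: the first target gives up its claim with --disable-cpumask-locks, so the second, locking target can start on the same mask and ends up owning the core 0 lock. In outline (waitforlisten steps omitted):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 --disable-cpumask-locks &        # no lock taken by the first instance
    spdk_tgt_pid=$!
    "$bin" -m 0x1 -r /var/tmp/spdk2.sock &         # default behaviour: takes the core 0 lock
    spdk_tgt_pid2=$!
    lslocks -p "$spdk_tgt_pid2" | grep -q spdk_cpu_lock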
00:05:24.673 [2024-07-15 19:21:14.282297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63103 ] 00:05:24.673 [2024-07-15 19:21:14.427445] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.932 [2024-07-15 19:21:14.544988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.500 19:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.500 19:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.500 19:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63103 00:05:25.500 19:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63103 00:05:25.500 19:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63075 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63075 ']' 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63075 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63075 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.435 killing process with pid 63075 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63075' 00:05:26.435 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63075 00:05:26.436 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63075 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63103 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63103 ']' 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63103 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63103 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63103' 00:05:27.003 killing process with pid 63103 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63103 00:05:27.003 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63103 00:05:27.262 00:05:27.262 real 0m3.736s 00:05:27.262 user 0m4.516s 00:05:27.262 sys 0m0.887s 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.262 ************************************ 00:05:27.262 END TEST locking_app_on_unlocked_coremask 00:05:27.262 ************************************ 00:05:27.262 19:21:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.262 19:21:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.262 19:21:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.262 19:21:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.262 19:21:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.262 ************************************ 00:05:27.262 START TEST locking_app_on_locked_coremask 00:05:27.262 ************************************ 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63171 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63171 /var/tmp/spdk.sock 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63171 ']' 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.262 19:21:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.262 [2024-07-15 19:21:16.961707] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:05:27.262 [2024-07-15 19:21:16.961815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ] 00:05:27.520 [2024-07-15 19:21:17.102087] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.520 [2024-07-15 19:21:17.190874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63199 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63199 /var/tmp/spdk2.sock 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63199 /var/tmp/spdk2.sock 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63199 /var/tmp/spdk2.sock 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63199 ']' 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.456 19:21:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.456 [2024-07-15 19:21:17.986623] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
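locking_app_on_locked_coremask starts the second, locking target under the NOT wrapper, so the test only passes if that launch fails, which is exactly what the claim_cpu_cores error on the next lines reports. The wrapper's contract, in simplified form (the real helper also normalises exit codes above 128):

    NOT() {
        # Run the given command and invert its status: success here means the
        # command failed, which is what these negative tests expect.
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Expected to fail: core 0 is already locked by the first spdk_tgt.
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock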
00:05:28.456 [2024-07-15 19:21:17.986717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63199 ] 00:05:28.456 [2024-07-15 19:21:18.133811] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63171 has claimed it. 00:05:28.456 [2024-07-15 19:21:18.133887] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.023 ERROR: process (pid: 63199) is no longer running 00:05:29.023 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63199) - No such process 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63171 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63171 00:05:29.023 19:21:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63171 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63171 ']' 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63171 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63171 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.590 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.591 killing process with pid 63171 00:05:29.591 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63171' 00:05:29.591 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63171 00:05:29.591 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63171 00:05:29.849 00:05:29.849 real 0m2.594s 00:05:29.849 user 0m3.165s 00:05:29.849 sys 0m0.580s 00:05:29.849 19:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.849 19:21:19 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:29.849 ************************************ 00:05:29.849 END TEST locking_app_on_locked_coremask 00:05:29.849 ************************************ 00:05:29.849 19:21:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:29.849 19:21:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:29.849 19:21:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.849 19:21:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.849 19:21:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.849 ************************************ 00:05:29.849 START TEST locking_overlapped_coremask 00:05:29.849 ************************************ 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63256 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63256 /var/tmp/spdk.sock 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63256 ']' 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.849 19:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.849 [2024-07-15 19:21:19.603784] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:05:29.850 [2024-07-15 19:21:19.603892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63256 ] 00:05:30.108 [2024-07-15 19:21:19.741834] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.108 [2024-07-15 19:21:19.812713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.108 [2024-07-15 19:21:19.812869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.108 [2024-07-15 19:21:19.812875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63286 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63286 /var/tmp/spdk2.sock 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63286 /var/tmp/spdk2.sock 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63286 /var/tmp/spdk2.sock 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63286 ']' 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.043 19:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.043 [2024-07-15 19:21:20.707042] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
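The locking_overlapped_coremask conflict that follows is purely a matter of the two masks sharing a core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target trips over the lock on core 2. A quick way to see the overlap:

    mask_a=0x7    # first instance: cores 0,1,2
    mask_b=0x1c   # second instance: cores 2,3,4
    overlap=$(( mask_a & mask_b ))
    printf 'shared cores mask: 0x%x\n' "$overlap"    # -> 0x4, i.e. core 2
    (( overlap != 0 )) && echo "second instance cannot claim its mask while the first holds it"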
00:05:31.043 [2024-07-15 19:21:20.707149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63286 ] 00:05:31.301 [2024-07-15 19:21:20.853411] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63256 has claimed it. 00:05:31.301 [2024-07-15 19:21:20.853488] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63286) - No such process 00:05:31.866 ERROR: process (pid: 63286) is no longer running 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63256 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63256 ']' 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63256 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63256 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.866 killing process with pid 63256 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63256' 00:05:31.866 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63256 00:05:31.866 19:21:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63256 00:05:32.124 00:05:32.124 real 0m2.171s 00:05:32.124 user 0m6.295s 00:05:32.124 sys 0m0.340s 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.124 ************************************ 00:05:32.124 END TEST locking_overlapped_coremask 00:05:32.124 ************************************ 00:05:32.124 19:21:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:32.124 19:21:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.124 19:21:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.124 19:21:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.124 19:21:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.124 ************************************ 00:05:32.124 START TEST locking_overlapped_coremask_via_rpc 00:05:32.124 ************************************ 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63332 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63332 /var/tmp/spdk.sock 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63332 ']' 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.124 19:21:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.124 [2024-07-15 19:21:21.817926] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:32.124 [2024-07-15 19:21:21.818014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63332 ] 00:05:32.382 [2024-07-15 19:21:21.954891] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.382 [2024-07-15 19:21:21.954950] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.382 [2024-07-15 19:21:22.027788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.382 [2024-07-15 19:21:22.027839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.382 [2024-07-15 19:21:22.027846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63363 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63363 /var/tmp/spdk2.sock 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63363 ']' 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.351 19:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.351 [2024-07-15 19:21:22.926316] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:33.351 [2024-07-15 19:21:22.926421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63363 ] 00:05:33.351 [2024-07-15 19:21:23.072683] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
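In the via_rpc variant both targets start with --disable-cpumask-locks, so neither claims anything at startup and both come up despite the overlapping masks; the conflict is only provoked later, when the locks are requested over JSON-RPC. The launch half of the test, in outline:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x7  --disable-cpumask-locks &                          # cores 0-2, no claim yet
    "$bin" -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2-4, no claim yet
    # Both start cleanly; the shared core 2 only matters once
    # framework_enable_cpumask_locks is called on each instance.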
00:05:33.351 [2024-07-15 19:21:23.072729] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.610 [2024-07-15 19:21:23.192574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.610 [2024-07-15 19:21:23.192655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.610 [2024-07-15 19:21:23.192656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.175 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.176 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.176 [2024-07-15 19:21:23.967514] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63332 has claimed it. 
00:05:34.176 2024/07/15 19:21:23 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:34.176 request: 00:05:34.176 { 00:05:34.176 "method": "framework_enable_cpumask_locks", 00:05:34.176 "params": {} 00:05:34.176 } 00:05:34.432 Got JSON-RPC error response 00:05:34.432 GoRPCClient: error on JSON-RPC call 00:05:34.432 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63332 /var/tmp/spdk.sock 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63332 ']' 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.433 19:21:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63363 /var/tmp/spdk2.sock 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63363 ']' 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
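Note: the JSON-RPC error above is the expected outcome of this test. The first target (pid 63332, -m 0x7) claimed its cores when framework_enable_cpumask_locks was called on the default socket, so the second target (-m 0x1c), which shares core 2, cannot claim it and gets Code=-32603. A minimal sketch of the same check outside the autotest harness follows; the binary path, core masks and socket names are taken from the trace above, while the scripts/rpc.py location and the sleep in place of waitforlisten are assumptions:

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                      # assumed path of the JSON-RPC client
  $SPDK_TGT -m 0x7 --disable-cpumask-locks &                           # cores 0-2, no lock files taken at boot
  $SPDK_TGT -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4
  sleep 1                                                              # crude stand-in for waitforlisten
  $RPC framework_enable_cpumask_locks                                  # first target locks cores 0-2
  if $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "unexpected: core 2 was claimed by both targets" >&2
      exit 1
  fi                                                                   # expected: Failed to claim CPU core: 2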
00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.689 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.946 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.946 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.946 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.947 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.947 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.947 ************************************ 00:05:34.947 END TEST locking_overlapped_coremask_via_rpc 00:05:34.947 ************************************ 00:05:34.947 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.947 00:05:34.947 real 0m2.855s 00:05:34.947 user 0m1.550s 00:05:34.947 sys 0m0.222s 00:05:34.947 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.947 19:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:34.947 19:21:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.947 19:21:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63332 ]] 00:05:34.947 19:21:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63332 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63332 ']' 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63332 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63332 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63332' 00:05:34.947 killing process with pid 63332 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63332 00:05:34.947 19:21:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63332 00:05:35.204 19:21:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63363 ]] 00:05:35.204 19:21:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63363 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63363 ']' 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63363 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:35.204 19:21:24 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63363 00:05:35.204 killing process with pid 63363 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63363' 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63363 00:05:35.204 19:21:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63363 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.463 Process with pid 63332 is not found 00:05:35.463 Process with pid 63363 is not found 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63332 ]] 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63332 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63332 ']' 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63332 00:05:35.463 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63332) - No such process 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63332 is not found' 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63363 ]] 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63363 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63363 ']' 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63363 00:05:35.463 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63363) - No such process 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63363 is not found' 00:05:35.463 19:21:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.463 00:05:35.463 real 0m19.238s 00:05:35.463 user 0m36.294s 00:05:35.463 sys 0m4.534s 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.463 19:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.463 ************************************ 00:05:35.463 END TEST cpu_locks 00:05:35.463 ************************************ 00:05:35.722 19:21:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.722 00:05:35.722 real 0m47.289s 00:05:35.722 user 1m34.427s 00:05:35.722 sys 0m7.970s 00:05:35.722 19:21:25 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.722 19:21:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.722 ************************************ 00:05:35.722 END TEST event 00:05:35.722 ************************************ 00:05:35.722 19:21:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.722 19:21:25 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:35.722 19:21:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.722 19:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.722 19:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.722 ************************************ 00:05:35.722 START TEST thread 
00:05:35.722 ************************************ 00:05:35.722 19:21:25 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:35.722 * Looking for test storage... 00:05:35.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:35.722 19:21:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.722 19:21:25 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:35.722 19:21:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.722 19:21:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.722 ************************************ 00:05:35.722 START TEST thread_poller_perf 00:05:35.722 ************************************ 00:05:35.722 19:21:25 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.722 [2024-07-15 19:21:25.425930] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:35.722 [2024-07-15 19:21:25.426242] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63509 ] 00:05:35.980 [2024-07-15 19:21:25.564787] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.981 [2024-07-15 19:21:25.635709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.981 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:36.915 ====================================== 00:05:36.915 busy:2209134822 (cyc) 00:05:36.915 total_run_count: 288000 00:05:36.915 tsc_hz: 2200000000 (cyc) 00:05:36.915 ====================================== 00:05:36.915 poller_cost: 7670 (cyc), 3486 (nsec) 00:05:36.915 00:05:36.915 real 0m1.306s 00:05:36.915 user 0m1.157s 00:05:36.915 sys 0m0.041s 00:05:36.915 19:21:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.915 19:21:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.915 ************************************ 00:05:36.915 END TEST thread_poller_perf 00:05:36.915 ************************************ 00:05:37.173 19:21:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:37.173 19:21:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.173 19:21:26 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:37.173 19:21:26 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.173 19:21:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.173 ************************************ 00:05:37.173 START TEST thread_poller_perf 00:05:37.173 ************************************ 00:05:37.173 19:21:26 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.173 [2024-07-15 19:21:26.786999] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
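Note: the poller_cost line in the summary above is consistent with the two counters printed with it: busy TSC cycles divided by the number of completed poller runs, converted to nanoseconds with tsc_hz (2.2 GHz on this host). The same relation holds for the zero-period run that follows.

  poller_cost (cyc)  ~= busy / total_run_count = 2209134822 / 288000 ~= 7670
  poller_cost (nsec) ~= 7670 cyc / 2.2 cyc-per-nsec                  ~= 3486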
00:05:37.173 [2024-07-15 19:21:26.787095] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63544 ] 00:05:37.173 [2024-07-15 19:21:26.924749] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.431 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:37.431 [2024-07-15 19:21:26.983469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.365 ====================================== 00:05:38.365 busy:2202037206 (cyc) 00:05:38.365 total_run_count: 4148000 00:05:38.365 tsc_hz: 2200000000 (cyc) 00:05:38.365 ====================================== 00:05:38.365 poller_cost: 530 (cyc), 240 (nsec) 00:05:38.365 ************************************ 00:05:38.365 END TEST thread_poller_perf 00:05:38.365 ************************************ 00:05:38.365 00:05:38.365 real 0m1.290s 00:05:38.365 user 0m1.139s 00:05:38.365 sys 0m0.044s 00:05:38.365 19:21:28 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.365 19:21:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 19:21:28 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:38.366 19:21:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:38.366 ************************************ 00:05:38.366 END TEST thread 00:05:38.366 ************************************ 00:05:38.366 00:05:38.366 real 0m2.776s 00:05:38.366 user 0m2.359s 00:05:38.366 sys 0m0.193s 00:05:38.366 19:21:28 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.366 19:21:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.366 19:21:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.366 19:21:28 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:38.366 19:21:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.366 19:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.366 19:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:38.366 ************************************ 00:05:38.366 START TEST accel 00:05:38.366 ************************************ 00:05:38.366 19:21:28 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:38.624 * Looking for test storage... 00:05:38.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:38.624 19:21:28 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:38.624 19:21:28 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:38.624 19:21:28 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.624 19:21:28 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63614 00:05:38.624 19:21:28 accel -- accel/accel.sh@63 -- # waitforlisten 63614 00:05:38.624 19:21:28 accel -- common/autotest_common.sh@829 -- # '[' -z 63614 ']' 00:05:38.624 19:21:28 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.624 19:21:28 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.624 19:21:28 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.624 19:21:28 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:38.624 19:21:28 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.624 19:21:28 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:38.624 19:21:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.624 19:21:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.624 19:21:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.624 19:21:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.624 19:21:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.624 19:21:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.624 19:21:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:38.624 19:21:28 accel -- accel/accel.sh@41 -- # jq -r . 00:05:38.624 [2024-07-15 19:21:28.311391] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:38.624 [2024-07-15 19:21:28.311553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63614 ] 00:05:38.884 [2024-07-15 19:21:28.451227] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.884 [2024-07-15 19:21:28.511283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@862 -- # return 0 00:05:39.823 19:21:29 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:39.823 19:21:29 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:39.823 19:21:29 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:39.823 19:21:29 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:39.823 19:21:29 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:39.823 19:21:29 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.823 19:21:29 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # IFS== 00:05:39.823 19:21:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:39.823 19:21:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:39.823 19:21:29 accel -- accel/accel.sh@75 -- # killprocess 63614 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@948 -- # '[' -z 63614 ']' 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@952 -- # kill -0 63614 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@953 -- # uname 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63614 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.823 killing process with pid 63614 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63614' 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@967 -- # kill 63614 00:05:39.823 19:21:29 accel -- common/autotest_common.sh@972 -- # wait 63614 00:05:40.082 19:21:29 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:40.082 19:21:29 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.082 19:21:29 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:40.082 19:21:29 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:40.082 19:21:29 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.082 19:21:29 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.082 19:21:29 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.082 19:21:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.082 ************************************ 00:05:40.082 START TEST accel_missing_filename 00:05:40.082 ************************************ 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.082 19:21:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:40.082 19:21:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:40.082 [2024-07-15 19:21:29.754541] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:40.082 [2024-07-15 19:21:29.754617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63683 ] 00:05:40.340 [2024-07-15 19:21:29.891425] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.340 [2024-07-15 19:21:29.965480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.340 [2024-07-15 19:21:30.001905] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.340 [2024-07-15 19:21:30.047828] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:40.340 A filename is required. 
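Note: accel_missing_filename is a negative test: accel_perf is started with -w compress but without -l, so it is expected to abort with 'A filename is required.' and a non-zero exit code, which the NOT wrapper turns into a pass. accel_compress_verify, which runs next, uses the same pattern with a compress job plus -y. Stripped of the harness, the first check amounts to the sketch below (binary path taken from the trace):

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  if $ACCEL_PERF -t 1 -w compress; then        # no -l <input file> given
      echo "FAIL: compress without an input file should not succeed" >&2
      exit 1
  fi                                           # expected on stderr: A filename is required.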
00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.340 00:05:40.340 real 0m0.402s 00:05:40.340 user 0m0.268s 00:05:40.340 sys 0m0.078s 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.340 19:21:30 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:40.340 ************************************ 00:05:40.340 END TEST accel_missing_filename 00:05:40.340 ************************************ 00:05:40.598 19:21:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.598 19:21:30 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.598 19:21:30 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:40.598 19:21:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.598 19:21:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.598 ************************************ 00:05:40.598 START TEST accel_compress_verify 00:05:40.598 ************************************ 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.598 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.598 19:21:30 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:40.598 19:21:30 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:40.598 [2024-07-15 19:21:30.205495] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:40.598 [2024-07-15 19:21:30.205593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63707 ] 00:05:40.598 [2024-07-15 19:21:30.341805] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.857 [2024-07-15 19:21:30.415476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.857 [2024-07-15 19:21:30.451894] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.857 [2024-07-15 19:21:30.495384] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:40.857 00:05:40.857 Compression does not support the verify option, aborting. 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.857 00:05:40.857 real 0m0.396s 00:05:40.857 user 0m0.271s 00:05:40.857 sys 0m0.092s 00:05:40.857 ************************************ 00:05:40.857 END TEST accel_compress_verify 00:05:40.857 ************************************ 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.857 19:21:30 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:40.857 19:21:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.857 19:21:30 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:40.857 19:21:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.857 19:21:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.857 19:21:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.857 ************************************ 00:05:40.857 START TEST accel_wrong_workload 00:05:40.857 ************************************ 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:40.857 19:21:30 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:40.857 Unsupported workload type: foobar 00:05:40.857 [2024-07-15 19:21:30.645531] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:40.857 accel_perf options: 00:05:40.857 [-h help message] 00:05:40.857 [-q queue depth per core] 00:05:40.857 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:40.857 [-T number of threads per core 00:05:40.857 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:40.857 [-t time in seconds] 00:05:40.857 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:40.857 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:40.857 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:40.857 [-l for compress/decompress workloads, name of uncompressed input file 00:05:40.857 [-S for crc32c workload, use this seed value (default 0) 00:05:40.857 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:40.857 [-f for fill workload, use this BYTE value (default 255) 00:05:40.857 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:40.857 [-y verify result if this switch is on] 00:05:40.857 [-a tasks to allocate per core (default: same value as -q)] 00:05:40.857 Can be used to spread operations across a wider range of memory. 
00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:40.857 ************************************ 00:05:40.857 END TEST accel_wrong_workload 00:05:40.857 ************************************ 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.857 00:05:40.857 real 0m0.029s 00:05:40.857 user 0m0.016s 00:05:40.857 sys 0m0.013s 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.857 19:21:30 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:41.115 19:21:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.115 19:21:30 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.115 19:21:30 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:41.115 19:21:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.115 19:21:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.115 ************************************ 00:05:41.115 START TEST accel_negative_buffers 00:05:41.115 ************************************ 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.115 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.115 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:41.116 19:21:30 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:41.116 -x option must be non-negative. 
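Note: these two option-validation cases, accel_wrong_workload above and accel_negative_buffers here, exercise accel_perf's argument parsing in the same way: -w foobar is rejected as an unsupported workload type and -x -1 as an invalid xor source-buffer count, each printing the usage text and exiting non-zero. Reduced to their essence (binary path from the trace, a bare ! standing in for the NOT helper):

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  ! $ACCEL_PERF -t 1 -w foobar          # Unsupported workload type: foobar
  ! $ACCEL_PERF -t 1 -w xor -y -x -1    # -x option must be non-negative.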
00:05:41.116 [2024-07-15 19:21:30.719497] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.116 accel_perf options: 00:05:41.116 [-h help message] 00:05:41.116 [-q queue depth per core] 00:05:41.116 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.116 [-T number of threads per core 00:05:41.116 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.116 [-t time in seconds] 00:05:41.116 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.116 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:41.116 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.116 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.116 [-S for crc32c workload, use this seed value (default 0) 00:05:41.116 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.116 [-f for fill workload, use this BYTE value (default 255) 00:05:41.116 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.116 [-y verify result if this switch is on] 00:05:41.116 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.116 Can be used to spread operations across a wider range of memory. 00:05:41.116 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:41.116 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.116 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.116 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.116 00:05:41.116 real 0m0.030s 00:05:41.116 user 0m0.017s 00:05:41.116 sys 0m0.012s 00:05:41.116 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.116 ************************************ 00:05:41.116 END TEST accel_negative_buffers 00:05:41.116 ************************************ 00:05:41.116 19:21:30 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:41.116 19:21:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.116 19:21:30 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.116 19:21:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:41.116 19:21:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.116 19:21:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.116 ************************************ 00:05:41.116 START TEST accel_crc32c 00:05:41.116 ************************************ 00:05:41.116 19:21:30 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:41.116 19:21:30 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:41.116 [2024-07-15 19:21:30.802276] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:41.116 [2024-07-15 19:21:30.802433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63766 ] 00:05:41.374 [2024-07-15 19:21:30.941283] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.374 [2024-07-15 19:21:31.025986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.374 19:21:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:42.750 19:21:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.750 00:05:42.750 real 0m1.414s 00:05:42.751 user 0m1.228s 00:05:42.751 sys 0m0.091s 00:05:42.751 ************************************ 00:05:42.751 END TEST accel_crc32c 00:05:42.751 19:21:32 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.751 19:21:32 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.751 ************************************ 00:05:42.751 19:21:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.751 19:21:32 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:42.751 19:21:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:42.751 19:21:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.751 19:21:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.751 ************************************ 00:05:42.751 START TEST accel_crc32c_C2 00:05:42.751 ************************************ 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:42.751 19:21:32 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:42.751 [2024-07-15 19:21:32.266517] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:42.751 [2024-07-15 19:21:32.266614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63800 ] 00:05:42.751 [2024-07-15 19:21:32.406960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.751 [2024-07-15 19:21:32.479180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.751 19:21:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.123 19:21:33 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.123 00:05:44.123 real 0m1.396s 00:05:44.123 user 0m1.213s 00:05:44.123 sys 0m0.088s 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.123 19:21:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.123 ************************************ 00:05:44.123 END TEST accel_crc32c_C2 00:05:44.123 ************************************ 00:05:44.123 19:21:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.123 19:21:33 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:44.123 19:21:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.123 19:21:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.123 19:21:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.123 ************************************ 00:05:44.123 START TEST accel_copy 00:05:44.123 ************************************ 00:05:44.123 19:21:33 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.123 19:21:33 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:44.123 19:21:33 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:44.123 [2024-07-15 19:21:33.713262] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:44.123 [2024-07-15 19:21:33.713351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63835 ] 00:05:44.123 [2024-07-15 19:21:33.846964] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.123 [2024-07-15 19:21:33.911384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 
19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.382 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.383 19:21:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:45.318 19:21:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.318 00:05:45.318 real 0m1.370s 00:05:45.318 user 0m1.208s 00:05:45.318 sys 0m0.067s 00:05:45.318 19:21:35 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.318 19:21:35 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.318 ************************************ 00:05:45.318 END TEST accel_copy 00:05:45.318 ************************************ 00:05:45.318 19:21:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.318 19:21:35 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.318 19:21:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:45.318 19:21:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.318 19:21:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.318 ************************************ 00:05:45.318 START TEST accel_fill 00:05:45.318 ************************************ 00:05:45.318 19:21:35 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.318 19:21:35 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:45.318 19:21:35 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:45.581 [2024-07-15 19:21:35.132124] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:45.581 [2024-07-15 19:21:35.132242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63869 ] 00:05:45.581 [2024-07-15 19:21:35.271957] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.581 [2024-07-15 19:21:35.346055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.581 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.840 19:21:35 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.840 19:21:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:46.777 19:21:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.777 00:05:46.777 real 0m1.392s 00:05:46.777 user 0m1.223s 00:05:46.777 sys 0m0.072s 00:05:46.777 19:21:36 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.777 19:21:36 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:46.777 ************************************ 00:05:46.777 END TEST accel_fill 00:05:46.777 ************************************ 00:05:46.777 19:21:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.777 19:21:36 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:46.777 19:21:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:46.777 19:21:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.777 19:21:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.777 ************************************ 00:05:46.777 START TEST accel_copy_crc32c 00:05:46.777 ************************************ 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:46.777 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:46.777 [2024-07-15 19:21:36.574289] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:46.777 [2024-07-15 19:21:36.574395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63904 ] 00:05:47.035 [2024-07-15 19:21:36.711882] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.035 [2024-07-15 19:21:36.783005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.035 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.036 19:21:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.411 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.412 00:05:48.412 real 0m1.396s 00:05:48.412 user 0m1.227s 00:05:48.412 sys 0m0.076s 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.412 19:21:37 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:48.412 ************************************ 00:05:48.412 END TEST accel_copy_crc32c 00:05:48.412 ************************************ 00:05:48.412 19:21:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.412 19:21:37 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.412 19:21:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:48.412 19:21:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.412 19:21:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.412 ************************************ 00:05:48.412 START TEST accel_copy_crc32c_C2 00:05:48.412 ************************************ 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.412 19:21:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:48.412 [2024-07-15 19:21:38.012304] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:48.412 [2024-07-15 19:21:38.012419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63933 ] 00:05:48.412 [2024-07-15 19:21:38.145433] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.412 [2024-07-15 19:21:38.206627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.670 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.671 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.671 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.671 19:21:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.603 00:05:49.603 real 0m1.392s 00:05:49.603 user 0m1.218s 00:05:49.603 sys 0m0.081s 00:05:49.603 ************************************ 00:05:49.603 END TEST accel_copy_crc32c_C2 
00:05:49.603 ************************************ 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.603 19:21:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 19:21:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.861 19:21:39 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:49.861 19:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:49.861 19:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.861 19:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 ************************************ 00:05:49.861 START TEST accel_dualcast 00:05:49.861 ************************************ 00:05:49.861 19:21:39 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:49.861 19:21:39 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:49.861 [2024-07-15 19:21:39.458014] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:05:49.861 [2024-07-15 19:21:39.458117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63967 ] 00:05:49.861 [2024-07-15 19:21:39.596072] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.861 [2024-07-15 19:21:39.657617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.167 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.168 19:21:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:51.100 19:21:40 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.100 00:05:51.100 real 0m1.382s 00:05:51.100 user 0m0.013s 00:05:51.100 sys 0m0.000s 00:05:51.100 19:21:40 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.100 19:21:40 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:51.100 ************************************ 00:05:51.100 END TEST accel_dualcast 00:05:51.100 ************************************ 00:05:51.100 19:21:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.100 19:21:40 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:51.100 19:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.100 19:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.100 19:21:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.100 ************************************ 00:05:51.100 START TEST accel_compare 00:05:51.100 ************************************ 00:05:51.100 19:21:40 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:51.100 19:21:40 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:51.100 [2024-07-15 19:21:40.892563] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
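The accel_compare run that begins here follows the same pattern as every test in this group: run_test wraps accel_test, which launches the accel_perf example binary with a workload selector and collects the timing summary printed at the end. A minimal sketch of reproducing such a run by hand, using only the binary path and flags visible in the trace (the harness-supplied -c /dev/fd/62 JSON config is omitted on the assumption that it is optional; -t appears to set the run time in seconds, -w the workload, and -y a verification mode):

    # 1-second compare workload with verification, no accel JSON config supplied
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y

With an empty accel_json_cfg, the trace reports accel_module=software, so the compare operation is serviced by the software engine rather than a hardware accelerator.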
00:05:51.100 [2024-07-15 19:21:40.892670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64004 ] 00:05:51.358 [2024-07-15 19:21:41.030846] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.358 [2024-07-15 19:21:41.093029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.358 19:21:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:52.735 19:21:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.735 00:05:52.735 real 0m1.388s 00:05:52.735 user 0m1.214s 00:05:52.735 sys 0m0.080s 00:05:52.735 19:21:42 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.735 19:21:42 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:52.735 ************************************ 00:05:52.735 END TEST accel_compare 00:05:52.735 ************************************ 00:05:52.735 19:21:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.735 19:21:42 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:52.735 19:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:52.735 19:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.735 19:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.735 ************************************ 00:05:52.735 START TEST accel_xor 00:05:52.735 ************************************ 00:05:52.735 19:21:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:52.735 19:21:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:52.735 [2024-07-15 19:21:42.322838] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:05:52.735 [2024-07-15 19:21:42.322956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64033 ] 00:05:52.735 [2024-07-15 19:21:42.455808] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.735 [2024-07-15 19:21:42.517846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.994 19:21:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.930 19:21:43 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 ************************************ 00:05:53.930 END TEST accel_xor 00:05:53.930 ************************************ 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.930 00:05:53.930 real 0m1.370s 00:05:53.930 user 0m1.189s 00:05:53.930 sys 0m0.087s 00:05:53.930 19:21:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.930 19:21:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:53.930 19:21:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.930 19:21:43 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:53.930 19:21:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.930 19:21:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.930 19:21:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.930 ************************************ 00:05:53.930 START TEST accel_xor 00:05:53.930 ************************************ 00:05:53.930 19:21:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:53.930 19:21:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:54.190 [2024-07-15 19:21:43.747871] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
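The second accel_xor test differs from the first only in the number of source buffers: its trace records val=3 and an extra -x 3 argument, where the first xor run used the default of two sources (val=2). Side by side, the two invocations seen in the trace reduce to (again without the harness-provided -c /dev/fd/62 config):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # two-source XOR (default)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # three-source XOR

Both runs complete on the software module in roughly 1.37 s of wall time, per the real-time summaries in the trace.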
00:05:54.190 [2024-07-15 19:21:43.747977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64073 ] 00:05:54.190 [2024-07-15 19:21:43.883927] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.190 [2024-07-15 19:21:43.943563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.190 19:21:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.565 19:21:45 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:55.565 19:21:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.565 00:05:55.565 real 0m1.375s 00:05:55.565 user 0m1.206s 00:05:55.565 sys 0m0.074s 00:05:55.565 ************************************ 00:05:55.565 END TEST accel_xor 00:05:55.565 ************************************ 00:05:55.565 19:21:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.565 19:21:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:55.565 19:21:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.565 19:21:45 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:55.565 19:21:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:55.565 19:21:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.565 19:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.565 ************************************ 00:05:55.565 START TEST accel_dif_verify 00:05:55.565 ************************************ 00:05:55.565 19:21:45 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:55.565 19:21:45 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:55.565 [2024-07-15 19:21:45.174939] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
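accel_dif_verify is the first of the DIF (T10 protection information) workloads in this group. Its configuration trace carries values the earlier tests did not: two '4096 bytes' entries plus '512 bytes' and '8 bytes', which presumably correspond to the buffer sizes, the data block size, and the per-block protection-information size; the log itself does not label them, so that reading is an inference. The recorded invocation reduces to:

    # 1-second DIF verify workload on the software engine
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify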
00:05:55.565 [2024-07-15 19:21:45.175072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64102 ] 00:05:55.565 [2024-07-15 19:21:45.310381] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.824 [2024-07-15 19:21:45.369822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.824 19:21:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.778 19:21:46 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.778 19:21:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:56.779 19:21:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.779 00:05:56.779 real 0m1.377s 00:05:56.779 user 0m1.215s 00:05:56.779 sys 0m0.070s 00:05:56.779 19:21:46 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.779 19:21:46 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:56.779 ************************************ 00:05:56.779 END TEST accel_dif_verify 00:05:56.779 ************************************ 00:05:56.779 19:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.779 19:21:46 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:56.779 19:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:56.779 19:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.779 19:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.779 ************************************ 00:05:56.779 START TEST accel_dif_generate 00:05:56.779 ************************************ 00:05:56.779 19:21:46 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.779 19:21:46 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:56.779 19:21:46 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:57.038 [2024-07-15 19:21:46.603849] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:57.038 [2024-07-15 19:21:46.603980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64131 ] 00:05:57.038 [2024-07-15 19:21:46.745058] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.038 [2024-07-15 19:21:46.817485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.297 19:21:46 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.297 19:21:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 ************************************ 00:05:58.232 END TEST accel_dif_generate 00:05:58.232 ************************************ 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.232 19:21:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:58.232 
19:21:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.232 00:05:58.232 real 0m1.392s 00:05:58.232 user 0m1.214s 00:05:58.232 sys 0m0.085s 00:05:58.232 19:21:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.232 19:21:47 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:58.232 19:21:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.232 19:21:48 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:58.232 19:21:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:58.232 19:21:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.232 19:21:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.232 ************************************ 00:05:58.232 START TEST accel_dif_generate_copy 00:05:58.232 ************************************ 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:58.232 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:58.491 [2024-07-15 19:21:48.051893] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
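Note: the dif_generate pass above finishes in about 1.39 s of wall time (real 0m1.392s) on the software accel module. A minimal standalone sketch of the same run follows; the binary path and the -t flag are copied from the trace, the dif_generate workload name is inferred from the opcode check in the trace, and dropping the harness-piped "-c /dev/fd/62" JSON accel config is an assumption (without it accel_perf should simply fall back to its default software module).
  # 1-second DIF-generate run on the software accel module (sketch, see note above)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate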
00:05:58.491 [2024-07-15 19:21:48.052032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64171 ] 00:05:58.491 [2024-07-15 19:21:48.197627] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.491 [2024-07-15 19:21:48.268659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:58.749 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.750 19:21:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.709 00:05:59.709 real 0m1.396s 00:05:59.709 user 0m1.211s 00:05:59.709 sys 0m0.091s 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.709 ************************************ 00:05:59.709 END TEST accel_dif_generate_copy 00:05:59.709 ************************************ 00:05:59.709 19:21:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:59.709 19:21:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.709 19:21:49 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:59.709 19:21:49 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.709 19:21:49 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:59.709 19:21:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.709 19:21:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.709 ************************************ 00:05:59.709 START TEST accel_comp 00:05:59.709 ************************************ 00:05:59.709 19:21:49 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:59.710 19:21:49 
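Note: the dif_generate_copy pass launched above completes in roughly 1.40 s (real 0m1.396s, software module). The trace records the invocation as the accel_perf example with "-c /dev/fd/62 -t 1 -w dif_generate_copy"; a standalone sketch that omits the harness-piped JSON config (an assumption, it should fall back to the software module) would be:
  # 1-second dif_generate_copy run, flags copied from the trace above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy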
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:59.710 19:21:49 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:59.710 [2024-07-15 19:21:49.486409] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:59.710 [2024-07-15 19:21:49.486498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64200 ] 00:05:59.968 [2024-07-15 19:21:49.619249] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.968 [2024-07-15 19:21:49.677409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 19:21:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:01.339 19:21:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.339 00:06:01.339 real 0m1.365s 00:06:01.339 user 0m1.199s 00:06:01.339 sys 0m0.076s 00:06:01.339 19:21:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.339 19:21:50 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:01.339 ************************************ 00:06:01.339 END TEST accel_comp 00:06:01.339 ************************************ 00:06:01.339 19:21:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.339 19:21:50 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.339 19:21:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:01.339 19:21:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.339 19:21:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.339 ************************************ 00:06:01.339 START TEST accel_decomp 00:06:01.339 ************************************ 00:06:01.339 19:21:50 
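Note: the compress pass (TEST accel_comp) runs for one second against the repo's bib test file and finishes in about 1.37 s (real 0m1.365s). The trace shows the command as accel_perf "-c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib"; a standalone sketch, again assuming the harness JSON config can be omitted in favour of the default software module:
  # 1-second software compress run over the repo's bib test file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib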
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:01.339 19:21:50 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:01.339 [2024-07-15 19:21:50.891781] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:01.339 [2024-07-15 19:21:50.891872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64242 ] 00:06:01.339 [2024-07-15 19:21:51.023648] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.339 [2024-07-15 19:21:51.086723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.339 19:21:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.711 19:21:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.711 00:06:02.711 real 0m1.367s 00:06:02.711 user 0m1.205s 00:06:02.711 sys 0m0.073s 00:06:02.711 19:21:52 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.711 ************************************ 00:06:02.711 END TEST accel_decomp 00:06:02.711 ************************************ 00:06:02.711 19:21:52 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 19:21:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.711 19:21:52 accel -- accel/accel.sh@118 -- # run_test 
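Note: the matching decompress pass (TEST accel_decomp) uses the same bib input and completes in about 1.37 s (real 0m1.367s). The trace adds a -y flag to the command line, which presumably requests verification of the decompressed output (an assumption; the flag is simply carried over here as recorded). Standalone sketch under the same assumptions as the earlier ones:
  # 1-second software decompress run with the trace's -y flag
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y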
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:02.711 19:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:02.711 19:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.711 19:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 ************************************ 00:06:02.711 START TEST accel_decomp_full 00:06:02.711 ************************************ 00:06:02.711 19:21:52 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:02.711 19:21:52 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:02.711 [2024-07-15 19:21:52.315745] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:02.711 [2024-07-15 19:21:52.315846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64271 ] 00:06:02.711 [2024-07-15 19:21:52.452586] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.969 [2024-07-15 19:21:52.519748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.969 19:21:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.903 19:21:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.903 00:06:03.903 real 0m1.397s 00:06:03.903 user 0m1.233s 00:06:03.903 sys 0m0.071s 00:06:03.903 19:21:53 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.903 19:21:53 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:03.903 ************************************ 00:06:03.903 END TEST accel_decomp_full 00:06:03.903 ************************************ 00:06:04.161 19:21:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.161 19:21:53 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.161 19:21:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:04.161 19:21:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.161 19:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.161 ************************************ 00:06:04.161 START TEST accel_decomp_mcore 00:06:04.161 ************************************ 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- 
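Note: accel_decomp_full repeats the decompress workload with "-o 0" appended, and the trace shows the per-operation data size switching from '4096 bytes' to '111250 bytes', i.e. the whole bib input is handled in one buffer rather than 4 KiB blocks; the pass still completes in about 1.40 s (real 0m1.397s). Standalone sketch, same assumption about omitting the "-c /dev/fd/62" config:
  # full-buffer (whole-file) software decompress run, -o 0 taken from the trace
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0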
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:04.161 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:04.161 [2024-07-15 19:21:53.760080] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:04.161 [2024-07-15 19:21:53.760183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64306 ] 00:06:04.161 [2024-07-15 19:21:53.900331] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.418 [2024-07-15 19:21:53.966576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.418 [2024-07-15 19:21:53.966713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.418 [2024-07-15 19:21:53.966812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.418 [2024-07-15 19:21:53.966817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.418 19:21:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.418 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.419 19:21:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.350 00:06:05.350 real 0m1.388s 00:06:05.350 user 0m0.017s 00:06:05.350 sys 0m0.002s 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.350 19:21:55 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:05.350 ************************************ 00:06:05.350 END TEST accel_decomp_mcore 00:06:05.350 ************************************ 00:06:05.608 19:21:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.608 19:21:55 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.608 19:21:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:05.608 19:21:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.608 19:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.608 ************************************ 00:06:05.608 START TEST accel_decomp_full_mcore 00:06:05.608 ************************************ 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.608 19:21:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:05.608 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:05.608 [2024-07-15 19:21:55.188029] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:05.608 [2024-07-15 19:21:55.188126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64343 ] 00:06:05.608 [2024-07-15 19:21:55.321238] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.608 [2024-07-15 19:21:55.391068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.608 [2024-07-15 19:21:55.391183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.608 [2024-07-15 19:21:55.391286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.608 [2024-07-15 19:21:55.391286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.874 19:21:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.874 19:21:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.807 00:06:06.807 real 0m1.398s 00:06:06.807 user 0m4.482s 00:06:06.807 sys 0m0.080s 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.807 19:21:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.807 ************************************ 00:06:06.807 END TEST accel_decomp_full_mcore 00:06:06.807 ************************************ 00:06:06.807 19:21:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.807 19:21:56 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.807 19:21:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:06.807 19:21:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.807 19:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.065 ************************************ 00:06:07.065 START TEST accel_decomp_mthread 00:06:07.065 ************************************ 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:07.065 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:07.065 [2024-07-15 19:21:56.635436] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:07.065 [2024-07-15 19:21:56.635525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64375 ] 00:06:07.065 [2024-07-15 19:21:56.767338] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.065 [2024-07-15 19:21:56.840823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.323 19:21:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 ************************************ 00:06:08.297 END TEST accel_decomp_mthread 00:06:08.297 ************************************ 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.297 00:06:08.297 real 0m1.398s 00:06:08.297 user 0m1.227s 00:06:08.297 sys 0m0.079s 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.297 19:21:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:08.297 19:21:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.297 19:21:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.297 19:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:08.297 19:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.297 19:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.297 ************************************ 00:06:08.297 START 
TEST accel_decomp_full_mthread 00:06:08.297 ************************************ 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:08.297 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:08.297 [2024-07-15 19:21:58.085194] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:08.297 [2024-07-15 19:21:58.085292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64410 ] 00:06:08.555 [2024-07-15 19:21:58.223827] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.555 [2024-07-15 19:21:58.295738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.555 19:21:58 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.555 19:21:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.926 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.927 00:06:09.927 real 0m1.432s 00:06:09.927 user 0m1.256s 00:06:09.927 sys 0m0.085s 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.927 19:21:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:09.927 ************************************ 00:06:09.927 END TEST accel_decomp_full_mthread 00:06:09.927 ************************************ 
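Every decompress case above drives the same accel_perf example binary; accel.sh prints the full command line before each run, including the JSON config it feeds over /dev/fd/62. Below is a minimal sketch of repeating one of those runs by hand, without the harness or the config fd. The paths and flag values are copied from the log; the per-flag notes are my reading of how the tests use them (assumptions, not accel_perf documentation) and should be checked against the tool's own help output.

#!/usr/bin/env bash
# Sketch only, assuming an SPDK tree built at the path the tests use.
#   -t 1            run the workload for one second
#   -w decompress   decompress workload
#   -l <file>       compressed input file (test/accel/bib in the log)
#   -y              verify the decompressed output
#   -o 0            transfer size; the "full" cases pass 0, which by the test names appears to mean the whole input
#   -T 2            two worker threads (the *_mthread variants)
#   -m 0xf          core mask of four cores (used by the *_mcore variants instead of -T)
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -o 0 -T 2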
00:06:09.927 19:21:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.927 19:21:59 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:09.927 19:21:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.927 19:21:59 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.927 19:21:59 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:09.927 19:21:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.927 19:21:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.927 19:21:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.927 19:21:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.927 19:21:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.927 19:21:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.927 19:21:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.927 19:21:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.927 19:21:59 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.927 ************************************ 00:06:09.927 START TEST accel_dif_functional_tests 00:06:09.927 ************************************ 00:06:09.927 19:21:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.927 [2024-07-15 19:21:59.595412] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:09.927 [2024-07-15 19:21:59.595536] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64446 ] 00:06:10.185 [2024-07-15 19:21:59.735666] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.185 [2024-07-15 19:21:59.801746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.185 [2024-07-15 19:21:59.801843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.185 [2024-07-15 19:21:59.801850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.185 00:06:10.185 00:06:10.185 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.185 http://cunit.sourceforge.net/ 00:06:10.185 00:06:10.185 00:06:10.185 Suite: accel_dif 00:06:10.185 Test: verify: DIF generated, GUARD check ...passed 00:06:10.185 Test: verify: DIF generated, APPTAG check ...passed 00:06:10.185 Test: verify: DIF generated, REFTAG check ...passed 00:06:10.185 Test: verify: DIF not generated, GUARD check ...passed 00:06:10.185 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:21:59.854384] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.185 [2024-07-15 19:21:59.854471] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.185 passed 00:06:10.185 Test: verify: DIF not generated, REFTAG check ...passed 00:06:10.185 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:10.185 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 19:21:59.854509] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.185 [2024-07-15 19:21:59.854582] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:10.185 passed 00:06:10.185 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:10.185 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:10.185 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:10.185 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:10.185 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 19:21:59.854756] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:10.185 passed 00:06:10.185 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:10.185 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:10.185 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:10.185 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 19:21:59.854964] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.185 passed 00:06:10.185 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:10.185 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 19:21:59.855039] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.185 [2024-07-15 19:21:59.855081] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.185 passed 00:06:10.185 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:10.185 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:10.185 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:10.185 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:10.185 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:10.185 Test: generate copy: iovecs-len validate ...passed 00:06:10.185 Test: generate copy: buffer alignment validate ...[2024-07-15 19:21:59.855373] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:10.185 passed 00:06:10.185 00:06:10.185 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.185 suites 1 1 n/a 0 0 00:06:10.185 tests 26 26 26 0 0 00:06:10.185 asserts 115 115 115 0 n/a 00:06:10.185 00:06:10.185 Elapsed time = 0.004 seconds 00:06:10.443 00:06:10.443 real 0m0.482s 00:06:10.443 user 0m0.547s 00:06:10.443 sys 0m0.112s 00:06:10.443 19:22:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.443 19:22:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:10.443 ************************************ 00:06:10.443 END TEST accel_dif_functional_tests 00:06:10.443 ************************************ 00:06:10.443 19:22:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.443 ************************************ 00:06:10.443 END TEST accel 00:06:10.443 ************************************ 00:06:10.443 00:06:10.443 real 0m31.918s 00:06:10.444 user 0m34.240s 00:06:10.444 sys 0m2.916s 00:06:10.444 19:22:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.444 19:22:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.444 19:22:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.444 19:22:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:10.444 19:22:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.444 19:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.444 19:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.444 ************************************ 00:06:10.444 START TEST accel_rpc 00:06:10.444 ************************************ 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:10.444 * Looking for test storage... 00:06:10.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:10.444 19:22:00 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.444 19:22:00 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64511 00:06:10.444 19:22:00 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:10.444 19:22:00 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64511 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64511 ']' 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.444 19:22:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.702 [2024-07-15 19:22:00.263998] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:10.702 [2024-07-15 19:22:00.264099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64511 ] 00:06:10.702 [2024-07-15 19:22:00.399001] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.702 [2024-07-15 19:22:00.457487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.702 19:22:00 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.702 19:22:00 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.702 19:22:00 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:10.702 19:22:00 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:10.702 19:22:00 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:10.702 19:22:00 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:10.702 19:22:00 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:10.702 19:22:00 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.702 19:22:00 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.702 19:22:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.702 ************************************ 00:06:10.702 START TEST accel_assign_opcode 00:06:10.702 ************************************ 00:06:10.702 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:10.702 19:22:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:10.702 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.702 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.960 [2024-07-15 19:22:00.509926] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.960 [2024-07-15 19:22:00.517896] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.960 software 00:06:10.960 00:06:10.960 real 0m0.206s 00:06:10.960 user 0m0.054s 00:06:10.960 sys 0m0.005s 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.960 ************************************ 00:06:10.960 19:22:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.960 END TEST accel_assign_opcode 00:06:10.960 ************************************ 00:06:10.960 19:22:00 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.960 19:22:00 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64511 00:06:10.960 19:22:00 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64511 ']' 00:06:10.960 19:22:00 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64511 00:06:10.960 19:22:00 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:10.960 19:22:00 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.960 19:22:00 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64511 00:06:11.218 19:22:00 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.218 19:22:00 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.218 killing process with pid 64511 00:06:11.218 19:22:00 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64511' 00:06:11.218 19:22:00 accel_rpc -- common/autotest_common.sh@967 -- # kill 64511 00:06:11.218 19:22:00 accel_rpc -- common/autotest_common.sh@972 -- # wait 64511 00:06:11.476 00:06:11.476 real 0m0.916s 00:06:11.476 user 0m0.898s 00:06:11.476 sys 0m0.316s 00:06:11.476 19:22:01 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.476 19:22:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.476 ************************************ 00:06:11.476 END TEST accel_rpc 00:06:11.476 ************************************ 00:06:11.476 19:22:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.476 19:22:01 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.476 19:22:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.476 19:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.476 19:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.476 ************************************ 00:06:11.476 START TEST app_cmdline 00:06:11.477 ************************************ 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.477 * Looking for test storage... 
00:06:11.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.477 19:22:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.477 19:22:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64603 00:06:11.477 19:22:01 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.477 19:22:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64603 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64603 ']' 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.477 19:22:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.477 [2024-07-15 19:22:01.217197] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:11.477 [2024-07-15 19:22:01.217295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64603 ] 00:06:11.735 [2024-07-15 19:22:01.355061] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.735 [2024-07-15 19:22:01.414550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.993 19:22:01 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.993 19:22:01 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:11.993 19:22:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:12.251 { 00:06:12.251 "fields": { 00:06:12.251 "commit": "b26ca8289", 00:06:12.251 "major": 24, 00:06:12.251 "minor": 9, 00:06:12.251 "patch": 0, 00:06:12.251 "suffix": "-pre" 00:06:12.251 }, 00:06:12.251 "version": "SPDK v24.09-pre git sha1 b26ca8289" 00:06:12.251 } 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:12.251 19:22:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:12.251 19:22:01 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:12.251 19:22:01 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.509 2024/07/15 19:22:02 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:12.509 request: 00:06:12.509 { 00:06:12.509 "method": "env_dpdk_get_mem_stats", 00:06:12.509 "params": {} 00:06:12.509 } 00:06:12.509 Got JSON-RPC error response 00:06:12.509 GoRPCClient: error on JSON-RPC call 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.509 19:22:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64603 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64603 ']' 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64603 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64603 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.509 killing process with pid 64603 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64603' 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@967 -- # kill 64603 00:06:12.509 19:22:02 app_cmdline -- common/autotest_common.sh@972 -- # wait 64603 00:06:12.767 00:06:12.767 real 0m1.382s 00:06:12.767 user 0m1.794s 00:06:12.767 sys 0m0.357s 00:06:12.767 19:22:02 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.767 19:22:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.767 ************************************ 00:06:12.767 END TEST app_cmdline 00:06:12.767 
************************************ 00:06:12.767 19:22:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.767 19:22:02 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.767 19:22:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.767 19:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.767 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.767 ************************************ 00:06:12.767 START TEST version 00:06:12.767 ************************************ 00:06:12.767 19:22:02 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:13.025 * Looking for test storage... 00:06:13.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:13.025 19:22:02 version -- app/version.sh@17 -- # get_header_version major 00:06:13.025 19:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # cut -f2 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.025 19:22:02 version -- app/version.sh@17 -- # major=24 00:06:13.025 19:22:02 version -- app/version.sh@18 -- # get_header_version minor 00:06:13.025 19:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # cut -f2 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.025 19:22:02 version -- app/version.sh@18 -- # minor=9 00:06:13.025 19:22:02 version -- app/version.sh@19 -- # get_header_version patch 00:06:13.025 19:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # cut -f2 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.025 19:22:02 version -- app/version.sh@19 -- # patch=0 00:06:13.025 19:22:02 version -- app/version.sh@20 -- # get_header_version suffix 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # cut -f2 00:06:13.025 19:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:13.025 19:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.025 19:22:02 version -- app/version.sh@20 -- # suffix=-pre 00:06:13.025 19:22:02 version -- app/version.sh@22 -- # version=24.9 00:06:13.025 19:22:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:13.025 19:22:02 version -- app/version.sh@28 -- # version=24.9rc0 00:06:13.025 19:22:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:13.025 19:22:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:13.025 19:22:02 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:13.025 19:22:02 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:13.025 00:06:13.025 real 0m0.148s 00:06:13.025 user 0m0.091s 00:06:13.025 sys 0m0.089s 00:06:13.025 19:22:02 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.025 ************************************ 00:06:13.025 19:22:02 
version -- common/autotest_common.sh@10 -- # set +x 00:06:13.025 END TEST version 00:06:13.025 ************************************ 00:06:13.025 19:22:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.025 19:22:02 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@198 -- # uname -s 00:06:13.025 19:22:02 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:13.025 19:22:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:13.025 19:22:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:13.025 19:22:02 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:13.025 19:22:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.025 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.025 19:22:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:13.025 19:22:02 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:13.025 19:22:02 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:13.025 19:22:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:13.025 19:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.025 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.025 ************************************ 00:06:13.025 START TEST nvmf_tcp 00:06:13.025 ************************************ 00:06:13.025 19:22:02 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:13.025 * Looking for test storage... 00:06:13.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:13.026 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:13.026 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:13.026 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.026 19:22:02 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.284 19:22:02 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.284 19:22:02 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.285 19:22:02 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.285 19:22:02 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.285 19:22:02 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:13.285 19:22:02 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:13.285 19:22:02 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.285 19:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:13.285 19:22:02 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:13.285 19:22:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:13.285 19:22:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.285 19:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.285 ************************************ 00:06:13.285 START TEST nvmf_example 00:06:13.285 ************************************ 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:13.285 * Looking for test storage... 
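nvmf.sh above hands each target suite to run_test together with the selected transport, and run_test wraps the script in the START TEST / END TEST banners and the timing output seen throughout this log. A condensed sketch of that wrapper pattern, assuming simplified behaviour (the real helper in autotest_common.sh also validates its arguments and manages xtrace):

# run_test-style wrapper: banner, timing and exit status for one sub-suite.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# as dispatched above for the example suite:
run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp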
00:06:13.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:13.285 19:22:02 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:13.285 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:13.286 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:13.286 Cannot find device "nvmf_init_br" 00:06:13.286 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:13.286 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:13.286 Cannot find device "nvmf_tgt_br" 00:06:13.286 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:13.286 19:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:13.286 Cannot find device "nvmf_tgt_br2" 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:13.286 Cannot find device "nvmf_init_br" 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:13.286 Cannot find device "nvmf_tgt_br" 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:13.286 Cannot find device 
"nvmf_tgt_br2" 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:13.286 Cannot find device "nvmf_br" 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:13.286 Cannot find device "nvmf_init_if" 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:13.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:13.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:13.286 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:13.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:06:13.544 00:06:13.544 --- 10.0.0.2 ping statistics --- 00:06:13.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.544 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:13.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:13.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:06:13.544 00:06:13.544 --- 10.0.0.3 ping statistics --- 00:06:13.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.544 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:13.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:06:13.544 00:06:13.544 --- 10.0.0.1 ping statistics --- 00:06:13.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.544 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:13.544 19:22:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64930 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64930 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64930 ']' 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.802 19:22:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:14.733 19:22:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:26.936 Initializing NVMe Controllers 00:06:26.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:26.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:26.936 Initialization complete. Launching workers. 00:06:26.936 ======================================================== 00:06:26.936 Latency(us) 00:06:26.936 Device Information : IOPS MiB/s Average min max 00:06:26.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14635.93 57.17 4372.37 692.45 23176.37 00:06:26.936 ======================================================== 00:06:26.936 Total : 14635.93 57.17 4372.37 692.45 23176.37 00:06:26.936 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:26.936 rmmod nvme_tcp 00:06:26.936 rmmod nvme_fabrics 00:06:26.936 rmmod nvme_keyring 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64930 ']' 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64930 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64930 ']' 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64930 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64930 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:26.936 killing process with pid 64930 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64930' 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64930 00:06:26.936 19:22:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64930 00:06:26.936 nvmf threads initialize successfully 00:06:26.936 bdev subsystem init successfully 
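The example run above provisions the TCP target over JSON-RPC and then measures it with spdk_nvme_perf from the initiator side. The rpc_cmd invocations in the log map onto scripts/rpc.py subcommands with the same arguments; a condensed recap, sketched here as direct rpc.py calls against the target's default /var/tmp/spdk.sock:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                 # Malloc0: 64 MiB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# queue depth 64, 4 KiB I/O, 30% reads / 70% writes, 10 seconds, over NVMe/TCP
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'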
00:06:26.936 created a nvmf target service 00:06:26.936 create targets's poll groups done 00:06:26.936 all subsystems of target started 00:06:26.936 nvmf target is running 00:06:26.936 all subsystems of target stopped 00:06:26.936 destroy targets's poll groups done 00:06:26.936 destroyed the nvmf target service 00:06:26.936 bdev subsystem finish successfully 00:06:26.936 nvmf threads destroy successfully 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.936 ************************************ 00:06:26.936 END TEST nvmf_example 00:06:26.936 ************************************ 00:06:26.936 00:06:26.936 real 0m12.213s 00:06:26.936 user 0m44.217s 00:06:26.936 sys 0m1.860s 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.936 19:22:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.936 19:22:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:26.936 19:22:15 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:26.936 19:22:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.936 19:22:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.936 19:22:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.936 ************************************ 00:06:26.936 START TEST nvmf_filesystem 00:06:26.936 ************************************ 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:26.936 * Looking for test storage... 
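The filesystem suite that starts here first sources test/common/autotest_common.sh, which pulls in build_config.sh and, via applications.sh, sanity-checks the generated include/spdk/config.h by matching it against #define SPDK_CONFIG_DEBUG, as seen further down in the log. A minimal sketch of that guard, assuming the same repository layout (the real check also feeds the SPDK_AUTOTEST_DEBUG_APPS switch):

# Confirm the tree was configured and built with debug support enabled.
rootdir=/home/vagrant/spdk_repo/spdk
config_h=$rootdir/include/spdk/config.h

if [[ -e $config_h ]] && [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
else
    echo "config.h missing or not a debug build" >&2
fi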
00:06:26.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:26.936 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:26.937 #define SPDK_CONFIG_H 00:06:26.937 #define SPDK_CONFIG_APPS 1 00:06:26.937 #define SPDK_CONFIG_ARCH native 00:06:26.937 #undef SPDK_CONFIG_ASAN 00:06:26.937 #define SPDK_CONFIG_AVAHI 1 00:06:26.937 #undef SPDK_CONFIG_CET 00:06:26.937 #define SPDK_CONFIG_COVERAGE 1 00:06:26.937 #define SPDK_CONFIG_CROSS_PREFIX 00:06:26.937 #undef SPDK_CONFIG_CRYPTO 00:06:26.937 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:26.937 #undef SPDK_CONFIG_CUSTOMOCF 00:06:26.937 #undef SPDK_CONFIG_DAOS 00:06:26.937 #define SPDK_CONFIG_DAOS_DIR 00:06:26.937 #define SPDK_CONFIG_DEBUG 1 00:06:26.937 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:26.937 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:26.937 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:26.937 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:26.937 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:26.937 #undef SPDK_CONFIG_DPDK_UADK 00:06:26.937 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:26.937 #define SPDK_CONFIG_EXAMPLES 1 00:06:26.937 #undef SPDK_CONFIG_FC 00:06:26.937 #define SPDK_CONFIG_FC_PATH 00:06:26.937 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:26.937 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:26.937 #undef SPDK_CONFIG_FUSE 00:06:26.937 #undef SPDK_CONFIG_FUZZER 00:06:26.937 #define SPDK_CONFIG_FUZZER_LIB 00:06:26.937 #define SPDK_CONFIG_GOLANG 1 00:06:26.937 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:26.937 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:26.937 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:26.937 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:26.937 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:26.937 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:26.937 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:26.937 #define SPDK_CONFIG_IDXD 1 00:06:26.937 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:26.937 #undef SPDK_CONFIG_IPSEC_MB 00:06:26.937 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:26.937 #define SPDK_CONFIG_ISAL 1 00:06:26.937 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:26.937 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:26.937 #define SPDK_CONFIG_LIBDIR 00:06:26.937 #undef SPDK_CONFIG_LTO 00:06:26.937 #define SPDK_CONFIG_MAX_LCORES 128 00:06:26.937 #define SPDK_CONFIG_NVME_CUSE 1 00:06:26.937 #undef SPDK_CONFIG_OCF 00:06:26.937 #define SPDK_CONFIG_OCF_PATH 00:06:26.937 #define SPDK_CONFIG_OPENSSL_PATH 00:06:26.937 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:26.937 #define SPDK_CONFIG_PGO_DIR 00:06:26.937 #undef SPDK_CONFIG_PGO_USE 00:06:26.937 #define SPDK_CONFIG_PREFIX /usr/local 00:06:26.937 #undef SPDK_CONFIG_RAID5F 00:06:26.937 #undef SPDK_CONFIG_RBD 00:06:26.937 #define SPDK_CONFIG_RDMA 1 00:06:26.937 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:26.937 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:26.937 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:26.937 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:26.937 #define SPDK_CONFIG_SHARED 1 00:06:26.937 #undef SPDK_CONFIG_SMA 00:06:26.937 #define SPDK_CONFIG_TESTS 1 00:06:26.937 #undef SPDK_CONFIG_TSAN 00:06:26.937 #define SPDK_CONFIG_UBLK 1 00:06:26.937 #define SPDK_CONFIG_UBSAN 1 00:06:26.937 #undef SPDK_CONFIG_UNIT_TESTS 00:06:26.937 #undef SPDK_CONFIG_URING 00:06:26.937 #define SPDK_CONFIG_URING_PATH 00:06:26.937 #undef SPDK_CONFIG_URING_ZNS 00:06:26.937 #define SPDK_CONFIG_USDT 1 00:06:26.937 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:26.937 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:26.937 #undef SPDK_CONFIG_VFIO_USER 00:06:26.937 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:26.937 #define SPDK_CONFIG_VHOST 1 00:06:26.937 #define SPDK_CONFIG_VIRTIO 1 00:06:26.937 #undef SPDK_CONFIG_VTUNE 00:06:26.937 #define SPDK_CONFIG_VTUNE_DIR 00:06:26.937 #define SPDK_CONFIG_WERROR 1 00:06:26.937 #define SPDK_CONFIG_WPDK_DIR 00:06:26.937 #undef SPDK_CONFIG_XNVME 00:06:26.937 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.937 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:26.938 19:22:15 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:26.938 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65187 ]] 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65187 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.gf54xw 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.gf54xw/tests/target /tmp/spdk.gf54xw 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:26.939 
19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:26.939 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785415680 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244600320 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785415680 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244600320 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=95395700736 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4307079168 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:26.940 * Looking for test storage... 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13785415680 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.940 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:26.941 Cannot find device "nvmf_tgt_br" 00:06:26.941 19:22:15 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:26.941 Cannot find device "nvmf_tgt_br2" 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:26.941 Cannot find device "nvmf_tgt_br" 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:26.941 Cannot find device "nvmf_tgt_br2" 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:26.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:26.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:26.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:06:26.941 00:06:26.941 --- 10.0.0.2 ping statistics --- 00:06:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.941 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:26.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:26.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:06:26.941 00:06:26.941 --- 10.0.0.3 ping statistics --- 00:06:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.941 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:26.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:26.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:06:26.941 00:06:26.941 --- 10.0.0.1 ping statistics --- 00:06:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.941 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.941 ************************************ 00:06:26.941 START TEST nvmf_filesystem_no_in_capsule 00:06:26.941 ************************************ 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:26.941 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65342 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65342 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65342 ']' 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
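The sequence traced above — nvmf_veth_init building the veth/bridge test network, the ping checks, and nvmfappstart launching nvmf_tgt inside the namespace — can be reproduced outside the harness. A minimal sketch follows, using only the interface names, addresses, and flags that appear in this log; it is trimmed to the first target interface (the log also creates nvmf_tgt_if2 with 10.0.0.3 the same way), and the socket-wait loop is only an approximation of the waitforlisten helper, not the helper itself. Run as root.

# Veth/namespace topology as set up by nvmf_veth_init in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator address on the host, target address inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open NVMe/TCP port 4420, allow bridge forwarding, and confirm reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2

# Launch the target inside the namespace with the same flags as the trace, then
# wait for its RPC socket — a stand-in for waitforlisten, which polls the RPC socket itself.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done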
00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.942 19:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.942 [2024-07-15 19:22:15.813517] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:26.942 [2024-07-15 19:22:15.813621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.942 [2024-07-15 19:22:15.954625] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.942 [2024-07-15 19:22:16.049283] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.942 [2024-07-15 19:22:16.049351] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.942 [2024-07-15 19:22:16.049380] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.942 [2024-07-15 19:22:16.049391] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.942 [2024-07-15 19:22:16.049400] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.942 [2024-07-15 19:22:16.049536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.942 [2024-07-15 19:22:16.049636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.942 [2024-07-15 19:22:16.050336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.942 [2024-07-15 19:22:16.050350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 [2024-07-15 19:22:16.821465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.201 
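The trace up to this point has started the target inside the nvmf_tgt_ns_spdk namespace and created the TCP transport over JSON-RPC. As a rough manual equivalent (not part of the suite itself; rpc_cmd in the trace is the suite's wrapper around SPDK's scripts/rpc.py, and the binary path is the one used by this run), the same bring-up looks approximately like:

    # Start nvmf_tgt inside the test namespace, pinned to cores 0-3 (-m 0xF),
    # with all trace groups enabled (-e 0xFFFF), as nvmf/common.sh@480 does.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Once the RPC socket is listening, create the TCP transport; the
    # "-t tcp -o" part is carried in from NVMF_TRANSPORT_OPTS
    # (nvmf/common.sh@465), and -c 0 disables in-capsule data for this pass.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0

The rest of target/filesystem.sh builds on this transport: a Malloc bdev, a subsystem, a namespace and a TCP listener, all of which appear in the RPCs that follow.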
19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 [2024-07-15 19:22:16.948102] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:27.201 { 00:06:27.201 "aliases": [ 00:06:27.201 "f2035da1-98c1-44e5-95ba-ee85bce10ae9" 00:06:27.201 ], 00:06:27.201 "assigned_rate_limits": { 00:06:27.201 "r_mbytes_per_sec": 0, 00:06:27.201 "rw_ios_per_sec": 0, 00:06:27.201 "rw_mbytes_per_sec": 0, 00:06:27.201 "w_mbytes_per_sec": 0 00:06:27.201 }, 00:06:27.201 "block_size": 512, 00:06:27.201 "claim_type": "exclusive_write", 00:06:27.201 "claimed": true, 00:06:27.201 "driver_specific": {}, 00:06:27.201 "memory_domains": [ 00:06:27.201 { 00:06:27.201 "dma_device_id": "system", 00:06:27.201 "dma_device_type": 1 00:06:27.201 }, 00:06:27.201 { 00:06:27.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.201 "dma_device_type": 2 00:06:27.201 } 00:06:27.201 ], 00:06:27.201 "name": "Malloc1", 00:06:27.201 "num_blocks": 1048576, 00:06:27.201 "product_name": "Malloc disk", 00:06:27.201 "supported_io_types": { 00:06:27.201 "abort": true, 00:06:27.201 "compare": false, 00:06:27.201 "compare_and_write": false, 00:06:27.201 "copy": true, 00:06:27.201 "flush": true, 00:06:27.201 "get_zone_info": false, 00:06:27.201 "nvme_admin": false, 00:06:27.201 "nvme_io": false, 00:06:27.201 "nvme_io_md": false, 00:06:27.201 "nvme_iov_md": false, 00:06:27.201 "read": true, 00:06:27.201 "reset": true, 00:06:27.201 "seek_data": false, 00:06:27.201 "seek_hole": false, 00:06:27.201 "unmap": true, 00:06:27.201 "write": true, 00:06:27.201 "write_zeroes": true, 00:06:27.201 "zcopy": true, 00:06:27.201 "zone_append": false, 00:06:27.201 "zone_management": false 00:06:27.201 }, 00:06:27.201 "uuid": "f2035da1-98c1-44e5-95ba-ee85bce10ae9", 00:06:27.201 "zoned": false 00:06:27.201 } 00:06:27.201 ]' 00:06:27.201 19:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:27.516 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:27.517 19:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:30.046 19:22:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.980 ************************************ 
00:06:30.980 START TEST filesystem_ext4 00:06:30.980 ************************************ 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:30.980 mke2fs 1.46.5 (30-Dec-2021) 00:06:30.980 Discarding device blocks: 0/522240 done 00:06:30.980 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:30.980 Filesystem UUID: 3859bbc7-7d0b-4986-8997-f6512f249e50 00:06:30.980 Superblock backups stored on blocks: 00:06:30.980 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:30.980 00:06:30.980 Allocating group tables: 0/64 done 00:06:30.980 Writing inode tables: 0/64 done 00:06:30.980 Creating journal (8192 blocks): done 00:06:30.980 Writing superblocks and filesystem accounting information: 0/64 done 00:06:30.980 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:30.980 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.239 19:22:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65342 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.239 00:06:31.239 real 0m0.330s 00:06:31.239 user 0m0.017s 00:06:31.239 sys 0m0.055s 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:31.239 ************************************ 00:06:31.239 END TEST filesystem_ext4 00:06:31.239 ************************************ 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.239 ************************************ 00:06:31.239 START TEST filesystem_btrfs 00:06:31.239 ************************************ 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:31.239 
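The force-flag selection traced just above (autotest_common.sh@924-932) is what lets one helper format ext4, btrfs and xfs alike: mkfs.ext4 spells "force" as -F while the other two use -f. A minimal sketch of that helper, reconstructed only from the xtrace lines visible here (the real function also keeps the i=0 retry counter seen in the trace, which is omitted):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F      # mkfs.ext4 uses -F to force formatting
        else
            force=-f      # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }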
19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:31.239 btrfs-progs v6.6.2 00:06:31.239 See https://btrfs.readthedocs.io for more information. 00:06:31.239 00:06:31.239 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:31.239 NOTE: several default settings have changed in version 5.15, please make sure 00:06:31.239 this does not affect your deployments: 00:06:31.239 - DUP for metadata (-m dup) 00:06:31.239 - enabled no-holes (-O no-holes) 00:06:31.239 - enabled free-space-tree (-R free-space-tree) 00:06:31.239 00:06:31.239 Label: (null) 00:06:31.239 UUID: f01930dc-1926-4a9f-bff8-7967214b92d5 00:06:31.239 Node size: 16384 00:06:31.239 Sector size: 4096 00:06:31.239 Filesystem size: 510.00MiB 00:06:31.239 Block group profiles: 00:06:31.239 Data: single 8.00MiB 00:06:31.239 Metadata: DUP 32.00MiB 00:06:31.239 System: DUP 8.00MiB 00:06:31.239 SSD detected: yes 00:06:31.239 Zoned device: no 00:06:31.239 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:31.239 Runtime features: free-space-tree 00:06:31.239 Checksum: crc32c 00:06:31.239 Number of devices: 1 00:06:31.239 Devices: 00:06:31.239 ID SIZE PATH 00:06:31.239 1 510.00MiB /dev/nvme0n1p1 00:06:31.239 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65342 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.239 19:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.239 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.239 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.239 ************************************ 00:06:31.239 END TEST filesystem_btrfs 00:06:31.239 ************************************ 00:06:31.239 00:06:31.239 real 0m0.166s 00:06:31.239 user 0m0.020s 00:06:31.239 sys 0m0.054s 00:06:31.239 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.239 
19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.498 ************************************ 00:06:31.498 START TEST filesystem_xfs 00:06:31.498 ************************************ 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:31.498 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:31.498 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:31.498 = sectsz=512 attr=2, projid32bit=1 00:06:31.498 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:31.498 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:31.498 data = bsize=4096 blocks=130560, imaxpct=25 00:06:31.498 = sunit=0 swidth=0 blks 00:06:31.498 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:31.498 log =internal log bsize=4096 blocks=16384, version=2 00:06:31.498 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:31.498 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:32.064 Discarding blocks...Done. 
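With the xfs filesystem created, the same smoke test that already ran for ext4 and btrfs follows: mount the TCP-attached namespace, do a little I/O, unmount, then confirm that the target (pid 65342 here) and the block device both survived. Paraphrasing target/filesystem.sh@23-43 as it appears in this trace:

    mount /dev/nvme0n1p1 /mnt/device            # namespace exported by cnode1 over TCP
    touch /mnt/device/aaa                       # small write through the filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                          # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1       # controller still enumerated
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present

The timing summary that follows (real 0m3.138s) covers this whole sequence for the xfs case.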
00:06:32.064 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.064 19:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:34.680 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:34.680 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:34.680 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:34.680 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:34.680 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65342 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:34.681 ************************************ 00:06:34.681 END TEST filesystem_xfs 00:06:34.681 ************************************ 00:06:34.681 00:06:34.681 real 0m3.138s 00:06:34.681 user 0m0.017s 00:06:34.681 sys 0m0.055s 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:34.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:34.681 19:22:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65342 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65342 ']' 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65342 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65342 00:06:34.681 killing process with pid 65342 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65342' 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65342 00:06:34.681 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65342 00:06:34.939 ************************************ 00:06:34.939 END TEST nvmf_filesystem_no_in_capsule 00:06:34.939 ************************************ 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:34.939 00:06:34.939 real 0m8.875s 00:06:34.939 user 0m33.469s 00:06:34.939 sys 0m1.505s 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
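At this point the first case is complete: the partition is removed with parted, the initiator disconnects from nqn.2016-06.io.spdk:cnode1, the subsystem is deleted, nvmf_tgt (pid 65342) is killed, and run_test moves on to the in-capsule variant. The only functional difference between the two top-level cases is the value threaded through to the transport's in-capsule data size (target/filesystem.sh@47 and @52), roughly:

    # no_in_capsule case (finishing above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # in_capsule case (starting below): allow up to 4096 bytes of
    # in-capsule data on TCP commands
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything else, Malloc1, the subsystem and listener, the ext4/btrfs/xfs loop, repeats unchanged in the second pass.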
00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.939 ************************************ 00:06:34.939 START TEST nvmf_filesystem_in_capsule 00:06:34.939 ************************************ 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:34.939 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65654 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65654 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65654 ']' 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.940 19:22:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.940 [2024-07-15 19:22:24.740636] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:34.940 [2024-07-15 19:22:24.740739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.198 [2024-07-15 19:22:24.887937] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.198 [2024-07-15 19:22:24.950502] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.198 [2024-07-15 19:22:24.950572] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.198 [2024-07-15 19:22:24.950585] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.198 [2024-07-15 19:22:24.950593] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:35.198 [2024-07-15 19:22:24.950600] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.198 [2024-07-15 19:22:24.950667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.198 [2024-07-15 19:22:24.950773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.198 [2024-07-15 19:22:24.950832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.198 [2024-07-15 19:22:24.950835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.134 [2024-07-15 19:22:25.845652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.134 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 Malloc1 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.394 19:22:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.394 [2024-07-15 19:22:25.978112] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.394 19:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:36.394 { 00:06:36.394 "aliases": [ 00:06:36.394 "0ff92fae-8880-412c-9372-e9ce137b6641" 00:06:36.394 ], 00:06:36.394 "assigned_rate_limits": { 00:06:36.394 "r_mbytes_per_sec": 0, 00:06:36.394 "rw_ios_per_sec": 0, 00:06:36.394 "rw_mbytes_per_sec": 0, 00:06:36.394 "w_mbytes_per_sec": 0 00:06:36.394 }, 00:06:36.394 "block_size": 512, 00:06:36.394 "claim_type": "exclusive_write", 00:06:36.394 "claimed": true, 00:06:36.394 "driver_specific": {}, 00:06:36.394 "memory_domains": [ 00:06:36.394 { 00:06:36.394 "dma_device_id": "system", 00:06:36.394 "dma_device_type": 1 00:06:36.394 }, 00:06:36.394 { 00:06:36.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.394 "dma_device_type": 2 00:06:36.394 } 00:06:36.394 ], 00:06:36.394 "name": "Malloc1", 00:06:36.394 "num_blocks": 1048576, 00:06:36.394 "product_name": "Malloc disk", 00:06:36.394 "supported_io_types": { 00:06:36.394 "abort": true, 00:06:36.394 "compare": false, 00:06:36.394 "compare_and_write": false, 00:06:36.394 "copy": true, 00:06:36.394 "flush": true, 00:06:36.394 "get_zone_info": false, 00:06:36.394 "nvme_admin": false, 00:06:36.394 "nvme_io": false, 00:06:36.394 "nvme_io_md": false, 00:06:36.394 "nvme_iov_md": false, 00:06:36.394 "read": true, 00:06:36.394 "reset": true, 00:06:36.394 "seek_data": false, 00:06:36.394 "seek_hole": false, 00:06:36.394 "unmap": true, 
00:06:36.394 "write": true, 00:06:36.394 "write_zeroes": true, 00:06:36.394 "zcopy": true, 00:06:36.394 "zone_append": false, 00:06:36.394 "zone_management": false 00:06:36.394 }, 00:06:36.394 "uuid": "0ff92fae-8880-412c-9372-e9ce137b6641", 00:06:36.394 "zoned": false 00:06:36.394 } 00:06:36.394 ]' 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:36.394 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:36.651 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:36.651 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:36.651 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:36.652 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:36.652 19:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:38.551 19:22:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:38.551 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:38.809 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:38.809 19:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.745 ************************************ 00:06:39.745 START TEST filesystem_in_capsule_ext4 00:06:39.745 ************************************ 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:39.745 19:22:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:39.745 mke2fs 1.46.5 (30-Dec-2021) 00:06:39.745 Discarding device blocks: 0/522240 done 00:06:39.745 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:39.745 Filesystem UUID: 75c8c495-20cd-403e-958a-cf49611f570d 00:06:39.745 Superblock backups stored on blocks: 00:06:39.745 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:39.745 00:06:39.745 Allocating group tables: 0/64 done 00:06:39.745 Writing inode tables: 0/64 done 00:06:39.745 Creating journal (8192 blocks): done 00:06:39.745 Writing superblocks and filesystem accounting information: 0/64 done 00:06:39.745 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:39.745 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65654 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:40.003 00:06:40.003 real 0m0.331s 00:06:40.003 user 0m0.018s 00:06:40.003 sys 0m0.048s 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.003 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:40.003 ************************************ 00:06:40.003 END TEST filesystem_in_capsule_ext4 00:06:40.003 ************************************ 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:40.261 19:22:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.261 ************************************ 00:06:40.261 START TEST filesystem_in_capsule_btrfs 00:06:40.261 ************************************ 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:40.261 btrfs-progs v6.6.2 00:06:40.261 See https://btrfs.readthedocs.io for more information. 00:06:40.261 00:06:40.261 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:40.261 NOTE: several default settings have changed in version 5.15, please make sure 00:06:40.261 this does not affect your deployments: 00:06:40.261 - DUP for metadata (-m dup) 00:06:40.261 - enabled no-holes (-O no-holes) 00:06:40.261 - enabled free-space-tree (-R free-space-tree) 00:06:40.261 00:06:40.261 Label: (null) 00:06:40.261 UUID: 5cbdc6f5-d156-4f14-a737-845170f2ff3c 00:06:40.261 Node size: 16384 00:06:40.261 Sector size: 4096 00:06:40.261 Filesystem size: 510.00MiB 00:06:40.261 Block group profiles: 00:06:40.261 Data: single 8.00MiB 00:06:40.261 Metadata: DUP 32.00MiB 00:06:40.261 System: DUP 8.00MiB 00:06:40.261 SSD detected: yes 00:06:40.261 Zoned device: no 00:06:40.261 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:40.261 Runtime features: free-space-tree 00:06:40.261 Checksum: crc32c 00:06:40.261 Number of devices: 1 00:06:40.261 Devices: 00:06:40.261 ID SIZE PATH 00:06:40.261 1 510.00MiB /dev/nvme0n1p1 00:06:40.261 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65654 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:40.261 ************************************ 00:06:40.261 END TEST filesystem_in_capsule_btrfs 00:06:40.261 ************************************ 00:06:40.261 00:06:40.261 real 0m0.167s 00:06:40.261 user 0m0.022s 00:06:40.261 sys 0m0.059s 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.261 19:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:40.261 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:06:40.261 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:40.261 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.261 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.261 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.261 ************************************ 00:06:40.261 START TEST filesystem_in_capsule_xfs 00:06:40.261 ************************************ 00:06:40.261 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:40.262 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:40.517 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:40.517 = sectsz=512 attr=2, projid32bit=1 00:06:40.517 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:40.517 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:40.517 data = bsize=4096 blocks=130560, imaxpct=25 00:06:40.517 = sunit=0 swidth=0 blks 00:06:40.518 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:40.518 log =internal log bsize=4096 blocks=16384, version=2 00:06:40.518 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:40.518 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:41.081 Discarding blocks...Done. 
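The three filesystem_in_capsule_* cases above (ext4, btrfs, xfs) all drive the same cycle against the exported namespace: format the partition, mount it, create and delete a file with syncs in between, unmount, and confirm the block devices are still listed. A minimal standalone sketch of that cycle follows, assuming the namespace is already connected and partitioned as /dev/nvme0n1p1 and that /mnt/device exists; make_fs here is an illustrative helper, not the harness's actual make_filesystem function, and the loop stands in for the three separate run_test invocations.

#!/usr/bin/env bash
set -euo pipefail

dev=/dev/nvme0n1p1        # assumption: partition already created by the harness
mnt=/mnt/device

make_fs() {               # illustrative stand-in for make_filesystem()
    local fstype=$1
    if [[ $fstype == ext4 ]]; then
        mkfs.ext4 -F "$dev"           # ext4 takes -F to force, as in the log
    else
        "mkfs.$fstype" -f "$dev"      # btrfs and xfs take -f
    fi
}

for fstype in ext4 btrfs xfs; do
    make_fs "$fstype"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"                  # exercise the in-capsule write path
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"
    lsblk -l -o NAME | grep -q -w "$(basename "$dev")"   # device node must still be listed
done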
00:06:41.081 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:41.081 19:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65654 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.981 ************************************ 00:06:42.981 END TEST filesystem_in_capsule_xfs 00:06:42.981 ************************************ 00:06:42.981 00:06:42.981 real 0m2.589s 00:06:42.981 user 0m0.025s 00:06:42.981 sys 0m0.053s 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:42.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:42.981 19:22:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:42.981 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65654 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65654 ']' 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65654 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.982 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65654 00:06:43.240 killing process with pid 65654 00:06:43.240 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.240 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.240 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65654' 00:06:43.240 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65654 00:06:43.240 19:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65654 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:43.503 00:06:43.503 real 0m8.376s 00:06:43.503 user 0m31.749s 00:06:43.503 sys 0m1.467s 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.503 ************************************ 00:06:43.503 END TEST nvmf_filesystem_in_capsule 00:06:43.503 ************************************ 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.503 rmmod nvme_tcp 00:06:43.503 rmmod nvme_fabrics 00:06:43.503 rmmod nvme_keyring 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:43.503 ************************************ 00:06:43.503 END TEST nvmf_filesystem 00:06:43.503 ************************************ 00:06:43.503 00:06:43.503 real 0m18.084s 00:06:43.503 user 1m5.480s 00:06:43.503 sys 0m3.330s 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.503 19:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.503 19:22:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:43.503 19:22:33 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:43.503 19:22:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:43.503 19:22:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.503 19:22:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.503 ************************************ 00:06:43.503 START TEST nvmf_target_discovery 00:06:43.503 ************************************ 00:06:43.503 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:43.765 * Looking for test storage... 
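The teardown just logged unwinds the in-capsule setup: the partition is removed, the initiator disconnects from nqn.2016-06.io.spdk:cnode1, the subsystem is deleted over RPC, the target process (pid 65654 in this run) is killed, the host-side NVMe modules are unloaded, and the initiator veth address is flushed. A rough equivalent is sketched below, assuming rpc_cmd resolves to scripts/rpc.py on the default /var/tmp/spdk.sock socket and that the target pid is held in $nvmfpid; the explicit netns delete is an assumption about what _remove_spdk_ns amounts to.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # drop the host-side controller
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"                                       # stop the nvmf_tgt reactors (the harness then waits on the pid)
modprobe -v -r nvme-tcp                               # unload host transport modules
modprobe -v -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true  # assumption: equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if                         # clear the initiator veth address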
00:06:43.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:43.765 Cannot find device "nvmf_tgt_br" 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:43.765 Cannot find device "nvmf_tgt_br2" 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:43.765 Cannot find device "nvmf_tgt_br" 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:43.765 Cannot find device "nvmf_tgt_br2" 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:43.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:43.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:43.765 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:44.038 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:44.039 19:22:33 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:44.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:06:44.039 00:06:44.039 --- 10.0.0.2 ping statistics --- 00:06:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.039 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:44.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:44.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:06:44.039 00:06:44.039 --- 10.0.0.3 ping statistics --- 00:06:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.039 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:44.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:06:44.039 00:06:44.039 --- 10.0.0.1 ping statistics --- 00:06:44.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.039 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66103 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66103 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66103 ']' 00:06:44.039 19:22:33 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.039 19:22:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 [2024-07-15 19:22:33.764235] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:44.039 [2024-07-15 19:22:33.764852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.296 [2024-07-15 19:22:33.903768] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.296 [2024-07-15 19:22:33.976551] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.296 [2024-07-15 19:22:33.976613] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.296 [2024-07-15 19:22:33.976627] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.296 [2024-07-15 19:22:33.976637] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.296 [2024-07-15 19:22:33.976646] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
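With NET_TYPE=virt the target never touches a physical NIC: nvmf_veth_init builds the 10.0.0.0/24 topology above out of veth pairs and a bridge, and nvmfappstart then launches nvmf_tgt inside the new namespace. The sketch below is condensed from the xtrace (the second target interface nvmf_tgt_if2/nvmf_tgt_br2 is omitted for brevity; the binary path is the one this run used):

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target reachability
modprobe nvme-tcp                                             # host-side NVMe/TCP transport
ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                                    # the harness next waits for the RPC socket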
00:06:44.296 [2024-07-15 19:22:33.976810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.296 [2024-07-15 19:22:33.976956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.296 [2024-07-15 19:22:33.977568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.296 [2024-07-15 19:22:33.977577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.226 [2024-07-15 19:22:34.790764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.226 Null1 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:45.226 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.227 [2024-07-15 19:22:34.846402] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 Null2 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 Null3 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 Null4 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.227 19:22:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.227 
19:22:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 4420 00:06:45.227 00:06:45.227 Discovery Log Number of Records 6, Generation counter 6 00:06:45.227 =====Discovery Log Entry 0====== 00:06:45.227 trtype: tcp 00:06:45.227 adrfam: ipv4 00:06:45.227 subtype: current discovery subsystem 00:06:45.227 treq: not required 00:06:45.227 portid: 0 00:06:45.227 trsvcid: 4420 00:06:45.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:45.227 traddr: 10.0.0.2 00:06:45.227 eflags: explicit discovery connections, duplicate discovery information 00:06:45.227 sectype: none 00:06:45.227 =====Discovery Log Entry 1====== 00:06:45.227 trtype: tcp 00:06:45.227 adrfam: ipv4 00:06:45.227 subtype: nvme subsystem 00:06:45.227 treq: not required 00:06:45.227 portid: 0 00:06:45.227 trsvcid: 4420 00:06:45.227 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:45.227 traddr: 10.0.0.2 00:06:45.227 eflags: none 00:06:45.227 sectype: none 00:06:45.227 =====Discovery Log Entry 2====== 00:06:45.227 trtype: tcp 00:06:45.227 adrfam: ipv4 00:06:45.227 subtype: nvme subsystem 00:06:45.227 treq: not required 00:06:45.227 portid: 0 00:06:45.227 trsvcid: 4420 00:06:45.227 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:45.227 traddr: 10.0.0.2 00:06:45.227 eflags: none 00:06:45.227 sectype: none 00:06:45.227 =====Discovery Log Entry 3====== 00:06:45.227 trtype: tcp 00:06:45.227 adrfam: ipv4 00:06:45.227 subtype: nvme subsystem 00:06:45.227 treq: not required 00:06:45.227 portid: 0 00:06:45.227 trsvcid: 4420 00:06:45.227 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:45.227 traddr: 10.0.0.2 00:06:45.227 eflags: none 00:06:45.227 sectype: none 00:06:45.227 =====Discovery Log Entry 4====== 00:06:45.227 trtype: tcp 00:06:45.227 adrfam: ipv4 00:06:45.227 subtype: nvme subsystem 00:06:45.227 treq: not required 00:06:45.227 portid: 0 00:06:45.227 trsvcid: 4420 00:06:45.227 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:45.227 traddr: 10.0.0.2 00:06:45.227 eflags: none 00:06:45.227 sectype: none 00:06:45.227 =====Discovery Log Entry 5====== 00:06:45.227 trtype: tcp 00:06:45.227 adrfam: ipv4 00:06:45.227 subtype: discovery subsystem referral 00:06:45.227 treq: not required 00:06:45.227 portid: 0 00:06:45.227 trsvcid: 4430 00:06:45.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:45.227 traddr: 10.0.0.2 00:06:45.227 eflags: none 00:06:45.227 sectype: none 00:06:45.227 Perform nvmf subsystem discovery via RPC 00:06:45.227 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:45.227 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:45.227 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.227 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.485 [ 00:06:45.485 { 00:06:45.485 "allow_any_host": true, 00:06:45.485 "hosts": [], 00:06:45.485 "listen_addresses": [ 00:06:45.485 { 00:06:45.485 "adrfam": "IPv4", 00:06:45.485 "traddr": "10.0.0.2", 00:06:45.485 "trsvcid": "4420", 00:06:45.485 "trtype": "TCP" 00:06:45.485 } 00:06:45.485 ], 00:06:45.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:45.485 "subtype": "Discovery" 00:06:45.485 }, 00:06:45.485 { 00:06:45.485 "allow_any_host": true, 00:06:45.485 "hosts": [], 00:06:45.485 "listen_addresses": [ 00:06:45.485 { 
00:06:45.485 "adrfam": "IPv4", 00:06:45.485 "traddr": "10.0.0.2", 00:06:45.485 "trsvcid": "4420", 00:06:45.485 "trtype": "TCP" 00:06:45.485 } 00:06:45.485 ], 00:06:45.485 "max_cntlid": 65519, 00:06:45.485 "max_namespaces": 32, 00:06:45.485 "min_cntlid": 1, 00:06:45.485 "model_number": "SPDK bdev Controller", 00:06:45.485 "namespaces": [ 00:06:45.485 { 00:06:45.485 "bdev_name": "Null1", 00:06:45.485 "name": "Null1", 00:06:45.485 "nguid": "DEF87FEACC4C414AA5C3D32932FC54ED", 00:06:45.485 "nsid": 1, 00:06:45.485 "uuid": "def87fea-cc4c-414a-a5c3-d32932fc54ed" 00:06:45.485 } 00:06:45.485 ], 00:06:45.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:45.485 "serial_number": "SPDK00000000000001", 00:06:45.485 "subtype": "NVMe" 00:06:45.485 }, 00:06:45.485 { 00:06:45.486 "allow_any_host": true, 00:06:45.486 "hosts": [], 00:06:45.486 "listen_addresses": [ 00:06:45.486 { 00:06:45.486 "adrfam": "IPv4", 00:06:45.486 "traddr": "10.0.0.2", 00:06:45.486 "trsvcid": "4420", 00:06:45.486 "trtype": "TCP" 00:06:45.486 } 00:06:45.486 ], 00:06:45.486 "max_cntlid": 65519, 00:06:45.486 "max_namespaces": 32, 00:06:45.486 "min_cntlid": 1, 00:06:45.486 "model_number": "SPDK bdev Controller", 00:06:45.486 "namespaces": [ 00:06:45.486 { 00:06:45.486 "bdev_name": "Null2", 00:06:45.486 "name": "Null2", 00:06:45.486 "nguid": "17AF9D596E3D49FC80AA6C61E32C85D7", 00:06:45.486 "nsid": 1, 00:06:45.486 "uuid": "17af9d59-6e3d-49fc-80aa-6c61e32c85d7" 00:06:45.486 } 00:06:45.486 ], 00:06:45.486 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:45.486 "serial_number": "SPDK00000000000002", 00:06:45.486 "subtype": "NVMe" 00:06:45.486 }, 00:06:45.486 { 00:06:45.486 "allow_any_host": true, 00:06:45.486 "hosts": [], 00:06:45.486 "listen_addresses": [ 00:06:45.486 { 00:06:45.486 "adrfam": "IPv4", 00:06:45.486 "traddr": "10.0.0.2", 00:06:45.486 "trsvcid": "4420", 00:06:45.486 "trtype": "TCP" 00:06:45.486 } 00:06:45.486 ], 00:06:45.486 "max_cntlid": 65519, 00:06:45.486 "max_namespaces": 32, 00:06:45.486 "min_cntlid": 1, 00:06:45.486 "model_number": "SPDK bdev Controller", 00:06:45.486 "namespaces": [ 00:06:45.486 { 00:06:45.486 "bdev_name": "Null3", 00:06:45.486 "name": "Null3", 00:06:45.486 "nguid": "3346398C81EF4BA99D682DBBFC166FD1", 00:06:45.486 "nsid": 1, 00:06:45.486 "uuid": "3346398c-81ef-4ba9-9d68-2dbbfc166fd1" 00:06:45.486 } 00:06:45.486 ], 00:06:45.486 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:45.486 "serial_number": "SPDK00000000000003", 00:06:45.486 "subtype": "NVMe" 00:06:45.486 }, 00:06:45.486 { 00:06:45.486 "allow_any_host": true, 00:06:45.486 "hosts": [], 00:06:45.486 "listen_addresses": [ 00:06:45.486 { 00:06:45.486 "adrfam": "IPv4", 00:06:45.486 "traddr": "10.0.0.2", 00:06:45.486 "trsvcid": "4420", 00:06:45.486 "trtype": "TCP" 00:06:45.486 } 00:06:45.486 ], 00:06:45.486 "max_cntlid": 65519, 00:06:45.486 "max_namespaces": 32, 00:06:45.486 "min_cntlid": 1, 00:06:45.486 "model_number": "SPDK bdev Controller", 00:06:45.486 "namespaces": [ 00:06:45.486 { 00:06:45.486 "bdev_name": "Null4", 00:06:45.486 "name": "Null4", 00:06:45.486 "nguid": "77742877024148AFA517344146E520F1", 00:06:45.486 "nsid": 1, 00:06:45.486 "uuid": "77742877-0241-48af-a517-344146e520f1" 00:06:45.486 } 00:06:45.486 ], 00:06:45.486 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:45.486 "serial_number": "SPDK00000000000004", 00:06:45.486 "subtype": "NVMe" 00:06:45.486 } 00:06:45.486 ] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.486 rmmod nvme_tcp 00:06:45.486 rmmod nvme_fabrics 00:06:45.486 rmmod nvme_keyring 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66103 ']' 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66103 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66103 ']' 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66103 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.486 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66103 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.745 
19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.745 killing process with pid 66103 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66103' 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66103 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66103 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:45.745 00:06:45.745 real 0m2.242s 00:06:45.745 user 0m6.237s 00:06:45.745 sys 0m0.558s 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.745 19:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.745 ************************************ 00:06:45.745 END TEST nvmf_target_discovery 00:06:45.745 ************************************ 00:06:45.745 19:22:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:45.745 19:22:35 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:45.745 19:22:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.745 19:22:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.745 19:22:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.745 ************************************ 00:06:45.745 START TEST nvmf_referrals 00:06:45.745 ************************************ 00:06:45.745 19:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:46.003 * Looking for test storage... 
00:06:46.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.003 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:46.004 Cannot find device "nvmf_tgt_br" 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:46.004 Cannot find device "nvmf_tgt_br2" 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:46.004 Cannot find device "nvmf_tgt_br" 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:46.004 Cannot find device "nvmf_tgt_br2" 
00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:46.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:46.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:46.004 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:46.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:06:46.263 00:06:46.263 --- 10.0.0.2 ping statistics --- 00:06:46.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.263 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:46.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:46.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:06:46.263 00:06:46.263 --- 10.0.0.3 ping statistics --- 00:06:46.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.263 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:46.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:06:46.263 00:06:46.263 --- 10.0.0.1 ping statistics --- 00:06:46.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.263 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:46.263 19:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66326 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66326 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66326 ']' 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.263 19:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.522 [2024-07-15 19:22:36.074782] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:46.522 [2024-07-15 19:22:36.074890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.522 [2024-07-15 19:22:36.215912] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.522 [2024-07-15 19:22:36.284236] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.522 [2024-07-15 19:22:36.284301] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.522 [2024-07-15 19:22:36.284315] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.522 [2024-07-15 19:22:36.284325] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.522 [2024-07-15 19:22:36.284334] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.522 [2024-07-15 19:22:36.284475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.522 [2024-07-15 19:22:36.284823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.522 [2024-07-15 19:22:36.285294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.522 [2024-07-15 19:22:36.285323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 [2024-07-15 19:22:37.123209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 [2024-07-15 19:22:37.142622] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:47.456 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.714 19:22:37 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.714 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:47.973 19:22:37 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.973 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:48.231 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:48.231 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:48.232 19:22:37 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:48.232 19:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:48.490 
19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.490 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.490 rmmod nvme_tcp 00:06:48.490 rmmod nvme_fabrics 00:06:48.490 rmmod nvme_keyring 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66326 ']' 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66326 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66326 ']' 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66326 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66326 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.749 killing process with pid 66326 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66326' 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66326 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66326 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:48.749 00:06:48.749 real 0m2.984s 00:06:48.749 user 0m9.857s 00:06:48.749 sys 0m0.784s 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.749 19:22:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.749 ************************************ 00:06:48.749 END TEST nvmf_referrals 00:06:48.749 ************************************ 00:06:49.008 19:22:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:49.008 19:22:38 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:49.008 19:22:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:49.008 19:22:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.008 19:22:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.008 ************************************ 00:06:49.008 START TEST nvmf_connect_disconnect 00:06:49.008 ************************************ 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:49.008 * Looking for test storage... 00:06:49.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.008 19:22:38 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:06:49.008 Cannot find device "nvmf_tgt_br" 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:06:49.008 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:49.008 Cannot find device "nvmf_tgt_br2" 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:49.009 Cannot find device "nvmf_tgt_br" 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:49.009 Cannot find device "nvmf_tgt_br2" 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:49.009 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:49.318 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:49.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:49.318 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:06:49.318 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:49.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:49.319 19:22:38 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:49.319 19:22:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:49.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:06:49.319 00:06:49.319 --- 10.0.0.2 ping statistics --- 00:06:49.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.319 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:49.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:49.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:06:49.319 00:06:49.319 --- 10.0.0.3 ping statistics --- 00:06:49.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.319 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:49.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:49.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:06:49.319 00:06:49.319 --- 10.0.0.1 ping statistics --- 00:06:49.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.319 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66628 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66628 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66628 ']' 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.319 19:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.578 [2024-07-15 19:22:39.121318] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:49.578 [2024-07-15 19:22:39.121426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.578 [2024-07-15 19:22:39.259967] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.578 [2024-07-15 19:22:39.331270] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
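The entries just above show nvmfappstart launching the SPDK target inside the test namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocking until the RPC socket answers. A minimal sketch of that launch pattern, assuming scripts/rpc.py as the readiness probe; the real helper bodies are not part of this excerpt:

    # Start the target in the test namespace and remember its pid.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the target responds, bailing out if it exits early.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
        sleep 0.5
    done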
00:06:49.578 [2024-07-15 19:22:39.331325] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.578 [2024-07-15 19:22:39.331338] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.578 [2024-07-15 19:22:39.331348] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.578 [2024-07-15 19:22:39.331370] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:49.578 [2024-07-15 19:22:39.331662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.578 [2024-07-15 19:22:39.331734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.578 [2024-07-15 19:22:39.332881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.578 [2024-07-15 19:22:39.332888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 [2024-07-15 19:22:40.201018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
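The rpc_cmd calls traced through this stretch configure the freshly started target for the connect/disconnect loop. Condensed below as direct scripts/rpc.py invocations, on the assumption that rpc_cmd forwards its arguments to that JSON-RPC client over /var/tmp/spdk.sock (the wrapper itself is defined outside this excerpt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport with the traced options
    $rpc bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # attach Malloc0 as a namespace of cnode1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420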
00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 [2024-07-15 19:22:40.262611] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:50.514 19:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:53.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:01.931 rmmod nvme_tcp 00:07:01.931 rmmod nvme_fabrics 00:07:01.931 rmmod nvme_keyring 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66628 ']' 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66628 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66628 ']' 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66628 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66628 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.931 killing process with pid 66628 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66628' 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66628 00:07:01.931 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66628 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:02.216 00:07:02.216 real 0m13.202s 00:07:02.216 user 0m48.424s 00:07:02.216 sys 0m2.006s 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.216 19:22:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.216 ************************************ 00:07:02.216 END TEST nvmf_connect_disconnect 00:07:02.216 ************************************ 00:07:02.216 19:22:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:02.216 19:22:51 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:02.216 19:22:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.216 19:22:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.216 19:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.216 ************************************ 00:07:02.216 START TEST nvmf_multitarget 00:07:02.216 ************************************ 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:02.216 * Looking for test storage... 
00:07:02.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.216 19:22:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.217 19:22:51 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:02.217 Cannot find device "nvmf_tgt_br" 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:02.217 Cannot find device "nvmf_tgt_br2" 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:02.217 Cannot find device "nvmf_tgt_br" 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:02.217 Cannot find device "nvmf_tgt_br2" 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:02.217 19:22:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:02.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:02.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:02.477 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:02.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:02.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:07:02.478 00:07:02.478 --- 10.0.0.2 ping statistics --- 00:07:02.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.478 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:02.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:02.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:07:02.478 00:07:02.478 --- 10.0.0.3 ping statistics --- 00:07:02.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.478 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:02.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:02.478 00:07:02.478 --- 10.0.0.1 ping statistics --- 00:07:02.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.478 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67032 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67032 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67032 ']' 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
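The nvmf_veth_init sequence traced above (here and for the previous test) builds the same small topology each time: the initiator stays in the root namespace at 10.0.0.1, the two target interfaces sit in nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, and all veth peers hang off the nvmf_br bridge with TCP port 4420 allowed through. A condensed sketch using only commands that appear in the trace, with the link bring-up lines folded into a loop:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The namespace split is what lets these NET_TYPE=virt runs exercise real NVMe/TCP traffic on a single VM: the kernel initiator in the root namespace reaches the userspace target over an actual bridged network path, which the ping checks above verify before the target is configured.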
00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.478 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:02.737 [2024-07-15 19:22:52.319279] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:07:02.737 [2024-07-15 19:22:52.319818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.737 [2024-07-15 19:22:52.452900] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.737 [2024-07-15 19:22:52.519757] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.737 [2024-07-15 19:22:52.519838] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.737 [2024-07-15 19:22:52.519860] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.737 [2024-07-15 19:22:52.519876] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.737 [2024-07-15 19:22:52.519890] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.737 [2024-07-15 19:22:52.520068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.737 [2024-07-15 19:22:52.520217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.737 [2024-07-15 19:22:52.520952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.737 [2024-07-15 19:22:52.520969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.995 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:02.996 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:03.254 "nvmf_tgt_1" 00:07:03.254 19:22:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:03.254 "nvmf_tgt_2" 00:07:03.254 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:07:03.254 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:03.512 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:03.512 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:03.512 true 00:07:03.512 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:03.769 true 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.769 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.028 rmmod nvme_tcp 00:07:04.028 rmmod nvme_fabrics 00:07:04.028 rmmod nvme_keyring 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67032 ']' 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67032 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67032 ']' 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67032 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67032 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.028 killing process with pid 67032 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67032' 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67032 00:07:04.028 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67032 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:04.287 ************************************ 00:07:04.287 END TEST nvmf_multitarget 00:07:04.287 ************************************ 00:07:04.287 00:07:04.287 real 0m2.144s 00:07:04.287 user 0m6.416s 00:07:04.287 sys 0m0.576s 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.287 19:22:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.287 19:22:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:04.287 19:22:54 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:04.287 19:22:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.287 19:22:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.287 19:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.287 ************************************ 00:07:04.287 START TEST nvmf_rpc 00:07:04.287 ************************************ 00:07:04.287 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:04.546 * Looking for test storage... 
00:07:04.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.546 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:04.547 Cannot find device "nvmf_tgt_br" 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.547 Cannot find device "nvmf_tgt_br2" 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:04.547 Cannot find device "nvmf_tgt_br" 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:04.547 Cannot find device "nvmf_tgt_br2" 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:04.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:04.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:04.547 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:04.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:07:04.805 00:07:04.805 --- 10.0.0.2 ping statistics --- 00:07:04.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.805 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:04.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:04.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:07:04.805 00:07:04.805 --- 10.0.0.3 ping statistics --- 00:07:04.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.805 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:04.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:04.805 00:07:04.805 --- 10.0.0.1 ping statistics --- 00:07:04.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.805 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.805 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67244 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67244 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67244 ']' 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.806 19:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.064 [2024-07-15 19:22:54.630666] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:07:05.064 [2024-07-15 19:22:54.630744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.064 [2024-07-15 19:22:54.771003] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.064 [2024-07-15 19:22:54.845683] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.064 [2024-07-15 19:22:54.845751] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:05.064 [2024-07-15 19:22:54.845764] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.064 [2024-07-15 19:22:54.845774] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.064 [2024-07-15 19:22:54.845783] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.064 [2024-07-15 19:22:54.846768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.064 [2024-07-15 19:22:54.846878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.064 [2024-07-15 19:22:54.846944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.064 [2024-07-15 19:22:54.846951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.997 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:05.997 "poll_groups": [ 00:07:05.997 { 00:07:05.997 "admin_qpairs": 0, 00:07:05.997 "completed_nvme_io": 0, 00:07:05.997 "current_admin_qpairs": 0, 00:07:05.997 "current_io_qpairs": 0, 00:07:05.997 "io_qpairs": 0, 00:07:05.997 "name": "nvmf_tgt_poll_group_000", 00:07:05.997 "pending_bdev_io": 0, 00:07:05.997 "transports": [] 00:07:05.997 }, 00:07:05.997 { 00:07:05.997 "admin_qpairs": 0, 00:07:05.997 "completed_nvme_io": 0, 00:07:05.997 "current_admin_qpairs": 0, 00:07:05.997 "current_io_qpairs": 0, 00:07:05.997 "io_qpairs": 0, 00:07:05.997 "name": "nvmf_tgt_poll_group_001", 00:07:05.997 "pending_bdev_io": 0, 00:07:05.997 "transports": [] 00:07:05.997 }, 00:07:05.997 { 00:07:05.997 "admin_qpairs": 0, 00:07:05.997 "completed_nvme_io": 0, 00:07:05.997 "current_admin_qpairs": 0, 00:07:05.997 "current_io_qpairs": 0, 00:07:05.997 "io_qpairs": 0, 00:07:05.998 "name": "nvmf_tgt_poll_group_002", 00:07:05.998 "pending_bdev_io": 0, 00:07:05.998 "transports": [] 00:07:05.998 }, 00:07:05.998 { 00:07:05.998 "admin_qpairs": 0, 00:07:05.998 "completed_nvme_io": 0, 00:07:05.998 "current_admin_qpairs": 0, 00:07:05.998 "current_io_qpairs": 0, 00:07:05.998 "io_qpairs": 0, 00:07:05.998 "name": "nvmf_tgt_poll_group_003", 00:07:05.998 "pending_bdev_io": 0, 00:07:05.998 "transports": [] 00:07:05.998 } 00:07:05.998 ], 00:07:05.998 "tick_rate": 2200000000 00:07:05.998 }' 00:07:05.998 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:05.998 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:05.998 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:05.998 19:22:55 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:05.998 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:05.998 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 [2024-07-15 19:22:55.836519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:06.256 "poll_groups": [ 00:07:06.256 { 00:07:06.256 "admin_qpairs": 0, 00:07:06.256 "completed_nvme_io": 0, 00:07:06.256 "current_admin_qpairs": 0, 00:07:06.256 "current_io_qpairs": 0, 00:07:06.256 "io_qpairs": 0, 00:07:06.256 "name": "nvmf_tgt_poll_group_000", 00:07:06.256 "pending_bdev_io": 0, 00:07:06.256 "transports": [ 00:07:06.256 { 00:07:06.256 "trtype": "TCP" 00:07:06.256 } 00:07:06.256 ] 00:07:06.256 }, 00:07:06.256 { 00:07:06.256 "admin_qpairs": 0, 00:07:06.256 "completed_nvme_io": 0, 00:07:06.256 "current_admin_qpairs": 0, 00:07:06.256 "current_io_qpairs": 0, 00:07:06.256 "io_qpairs": 0, 00:07:06.256 "name": "nvmf_tgt_poll_group_001", 00:07:06.256 "pending_bdev_io": 0, 00:07:06.256 "transports": [ 00:07:06.256 { 00:07:06.256 "trtype": "TCP" 00:07:06.256 } 00:07:06.256 ] 00:07:06.256 }, 00:07:06.256 { 00:07:06.256 "admin_qpairs": 0, 00:07:06.256 "completed_nvme_io": 0, 00:07:06.256 "current_admin_qpairs": 0, 00:07:06.256 "current_io_qpairs": 0, 00:07:06.256 "io_qpairs": 0, 00:07:06.256 "name": "nvmf_tgt_poll_group_002", 00:07:06.256 "pending_bdev_io": 0, 00:07:06.256 "transports": [ 00:07:06.256 { 00:07:06.256 "trtype": "TCP" 00:07:06.256 } 00:07:06.256 ] 00:07:06.256 }, 00:07:06.256 { 00:07:06.256 "admin_qpairs": 0, 00:07:06.256 "completed_nvme_io": 0, 00:07:06.256 "current_admin_qpairs": 0, 00:07:06.256 "current_io_qpairs": 0, 00:07:06.256 "io_qpairs": 0, 00:07:06.256 "name": "nvmf_tgt_poll_group_003", 00:07:06.256 "pending_bdev_io": 0, 00:07:06.256 "transports": [ 00:07:06.256 { 00:07:06.256 "trtype": "TCP" 00:07:06.256 } 00:07:06.256 ] 00:07:06.256 } 00:07:06.256 ], 00:07:06.256 "tick_rate": 2200000000 00:07:06.256 }' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
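The jcount/jsum checks running at this point reduce to jq filters piped through wc and awk over the nvmf_get_stats JSON captured above. A minimal sketch of that aggregation, assuming the JSON has been saved into a shell variable named stats, as the $stats assignment in the trace suggests:

# count entries matched by a jq filter (4 poll groups expected with -m 0xF)
jcount() { jq "$1" <<< "$stats" | wc -l; }
# sum a numeric field across all poll groups (0 expected before any host connects)
jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

jcount '.poll_groups[].name'
jsum '.poll_groups[].admin_qpairs'
jsum '.poll_groups[].io_qpairs'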
00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 Malloc1 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 [2024-07-15 19:22:56.030518] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -a 10.0.0.2 -s 4420 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -a 10.0.0.2 -s 4420 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:06.256 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:06.257 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -a 10.0.0.2 -s 4420 00:07:06.257 [2024-07-15 19:22:56.052726] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055' 00:07:06.257 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:06.257 could not add new controller: failed to write to nvme-fabrics device 00:07:06.257 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:06.257 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.257 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.257 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:06.515 19:22:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.056 19:22:58 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:09.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.056 [2024-07-15 19:22:58.333937] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055' 00:07:09.056 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:09.056 could not add new controller: failed to write to nvme-fabrics device 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:09.056 19:22:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:11.002 19:23:00 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 [2024-07-15 19:23:00.617765] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.002 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.261 19:23:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.261 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.261 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.261 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:11.261 19:23:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 [2024-07-15 19:23:02.921079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.164 19:23:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.423 19:23:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.423 19:23:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:13.423 19:23:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.423 19:23:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:13.423 19:23:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:15.323 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.580 [2024-07-15 19:23:05.204907] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.580 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.905 19:23:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.905 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:15.905 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.905 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:15.905 19:23:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.802 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.803 [2024-07-15 19:23:07.496023] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.803 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.060 19:23:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.060 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:18.060 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.060 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:18.060 19:23:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:19.957 
19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:19.957 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.216 [2024-07-15 19:23:09.791420] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.216 19:23:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.216 19:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:22.746 19:23:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 [2024-07-15 19:23:12.094749] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 [2024-07-15 19:23:12.142797] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 [2024-07-15 19:23:12.190830] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
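Stripped of the rpc_cmd wrapper and xtrace noise, each pass of the RPC-only loop running here issues roughly the following sequence against the target's /var/tmp/spdk.sock. This is a sketch only; the rpc invocation below assumes SPDK's scripts/rpc.py client, while the method names, NQN, serial, address and port are exactly those seen in the trace:

rpc='scripts/rpc.py -s /var/tmp/spdk.sock'
# build the subsystem up ...
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
# ... then tear it straight back down without ever connecting a host
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1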
00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 [2024-07-15 19:23:12.239004] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 [2024-07-15 19:23:12.286939] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:22.747 "poll_groups": [ 00:07:22.747 { 00:07:22.747 "admin_qpairs": 2, 00:07:22.747 "completed_nvme_io": 67, 00:07:22.747 "current_admin_qpairs": 0, 00:07:22.747 "current_io_qpairs": 0, 00:07:22.747 "io_qpairs": 16, 00:07:22.747 "name": "nvmf_tgt_poll_group_000", 00:07:22.747 "pending_bdev_io": 0, 00:07:22.747 "transports": [ 00:07:22.747 { 00:07:22.747 "trtype": "TCP" 00:07:22.747 } 00:07:22.747 ] 00:07:22.747 }, 00:07:22.747 { 00:07:22.747 "admin_qpairs": 3, 00:07:22.747 "completed_nvme_io": 68, 00:07:22.747 "current_admin_qpairs": 0, 00:07:22.747 "current_io_qpairs": 0, 00:07:22.747 "io_qpairs": 17, 00:07:22.747 "name": "nvmf_tgt_poll_group_001", 00:07:22.747 "pending_bdev_io": 0, 00:07:22.747 "transports": [ 00:07:22.747 { 00:07:22.747 "trtype": "TCP" 00:07:22.747 } 00:07:22.747 ] 00:07:22.747 }, 00:07:22.747 { 00:07:22.747 "admin_qpairs": 1, 00:07:22.747 
"completed_nvme_io": 118, 00:07:22.747 "current_admin_qpairs": 0, 00:07:22.747 "current_io_qpairs": 0, 00:07:22.747 "io_qpairs": 19, 00:07:22.747 "name": "nvmf_tgt_poll_group_002", 00:07:22.747 "pending_bdev_io": 0, 00:07:22.747 "transports": [ 00:07:22.747 { 00:07:22.747 "trtype": "TCP" 00:07:22.747 } 00:07:22.747 ] 00:07:22.747 }, 00:07:22.747 { 00:07:22.747 "admin_qpairs": 1, 00:07:22.747 "completed_nvme_io": 167, 00:07:22.747 "current_admin_qpairs": 0, 00:07:22.747 "current_io_qpairs": 0, 00:07:22.747 "io_qpairs": 18, 00:07:22.747 "name": "nvmf_tgt_poll_group_003", 00:07:22.747 "pending_bdev_io": 0, 00:07:22.747 "transports": [ 00:07:22.747 { 00:07:22.747 "trtype": "TCP" 00:07:22.747 } 00:07:22.747 ] 00:07:22.747 } 00:07:22.747 ], 00:07:22.747 "tick_rate": 2200000000 00:07:22.747 }' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.747 rmmod nvme_tcp 00:07:22.747 rmmod nvme_fabrics 00:07:22.747 rmmod nvme_keyring 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67244 ']' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67244 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67244 ']' 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67244 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:22.747 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67244 00:07:23.004 killing process with pid 67244 00:07:23.004 19:23:12 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67244' 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67244 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67244 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:23.004 00:07:23.004 real 0m18.724s 00:07:23.004 user 1m10.194s 00:07:23.004 sys 0m2.613s 00:07:23.004 ************************************ 00:07:23.004 END TEST nvmf_rpc 00:07:23.004 ************************************ 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.004 19:23:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.267 19:23:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:23.267 19:23:12 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:23.267 19:23:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.267 19:23:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.267 19:23:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.267 ************************************ 00:07:23.267 START TEST nvmf_invalid 00:07:23.267 ************************************ 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:23.267 * Looking for test storage... 
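Before nvmf_invalid starts below, one helper from the stats check above is worth spelling out: jsum (target/rpc.sh@19-@20) applies a jq filter to the captured nvmf_get_stats output and sums the resulting numbers with awk. A standalone sketch, assuming $stats holds the JSON saved at rpc.sh@110:

  # Sum one numeric field across all poll groups in the captured stats.
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 in this run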
00:07:23.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.267 
19:23:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.267 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.268 19:23:12 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:23.268 Cannot find device "nvmf_tgt_br" 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:23.268 Cannot find device "nvmf_tgt_br2" 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:23.268 19:23:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:23.268 Cannot find device "nvmf_tgt_br" 00:07:23.268 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:23.268 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:23.268 Cannot find device "nvmf_tgt_br2" 00:07:23.268 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:23.268 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:23.268 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:23.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:23.534 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:23.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:07:23.534 00:07:23.534 --- 10.0.0.2 ping statistics --- 00:07:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.534 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:23.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:23.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:07:23.534 00:07:23.534 --- 10.0.0.3 ping statistics --- 00:07:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.534 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:23.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:23.534 00:07:23.534 --- 10.0.0.1 ping statistics --- 00:07:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.534 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.534 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67761 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67761 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67761 ']' 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.791 19:23:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.791 [2024-07-15 19:23:13.425240] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:07:23.791 [2024-07-15 19:23:13.425333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.791 [2024-07-15 19:23:13.563604] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.050 [2024-07-15 19:23:13.631841] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.050 [2024-07-15 19:23:13.631905] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.050 [2024-07-15 19:23:13.631918] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.050 [2024-07-15 19:23:13.631927] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.050 [2024-07-15 19:23:13.631937] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.050 [2024-07-15 19:23:13.632092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.050 [2024-07-15 19:23:13.632284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.050 [2024-07-15 19:23:13.632949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.050 [2024-07-15 19:23:13.632985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27519 00:07:24.981 [2024-07-15 19:23:14.723065] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 19:23:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27519 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:24.981 request: 00:07:24.981 { 00:07:24.981 "method": "nvmf_create_subsystem", 00:07:24.981 "params": { 00:07:24.981 "nqn": "nqn.2016-06.io.spdk:cnode27519", 00:07:24.981 "tgt_name": "foobar" 00:07:24.981 } 00:07:24.981 } 00:07:24.981 Got JSON-RPC error response 00:07:24.981 GoRPCClient: error on JSON-RPC call' 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 19:23:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27519 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:24.981 
request: 00:07:24.981 { 00:07:24.981 "method": "nvmf_create_subsystem", 00:07:24.981 "params": { 00:07:24.981 "nqn": "nqn.2016-06.io.spdk:cnode27519", 00:07:24.981 "tgt_name": "foobar" 00:07:24.981 } 00:07:24.981 } 00:07:24.981 Got JSON-RPC error response 00:07:24.981 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:24.981 19:23:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20637 00:07:25.238 [2024-07-15 19:23:14.983290] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20637: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:25.239 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 19:23:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20637 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:25.239 request: 00:07:25.239 { 00:07:25.239 "method": "nvmf_create_subsystem", 00:07:25.239 "params": { 00:07:25.239 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:07:25.239 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:25.239 } 00:07:25.239 } 00:07:25.239 Got JSON-RPC error response 00:07:25.239 GoRPCClient: error on JSON-RPC call' 00:07:25.239 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 19:23:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20637 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:25.239 request: 00:07:25.239 { 00:07:25.239 "method": "nvmf_create_subsystem", 00:07:25.239 "params": { 00:07:25.239 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:07:25.239 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:25.239 } 00:07:25.239 } 00:07:25.239 Got JSON-RPC error response 00:07:25.239 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:25.239 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:25.239 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19813 00:07:25.805 [2024-07-15 19:23:15.311590] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19813: invalid model number 'SPDK_Controller' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 19:23:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode19813], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:25.805 request: 00:07:25.805 { 00:07:25.805 "method": "nvmf_create_subsystem", 00:07:25.805 "params": { 00:07:25.805 "nqn": "nqn.2016-06.io.spdk:cnode19813", 00:07:25.805 "model_number": "SPDK_Controller\u001f" 00:07:25.805 } 00:07:25.805 } 00:07:25.805 Got JSON-RPC error response 00:07:25.805 GoRPCClient: error on JSON-RPC call' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 19:23:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode19813], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:25.805 request: 00:07:25.805 { 00:07:25.805 "method": "nvmf_create_subsystem", 00:07:25.805 "params": { 00:07:25.805 "nqn": "nqn.2016-06.io.spdk:cnode19813", 00:07:25.805 "model_number": "SPDK_Controller\u001f" 00:07:25.805 } 00:07:25.805 } 00:07:25.805 Got JSON-RPC error response 00:07:25.805 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:25.805 19:23:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:25.805 19:23:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
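The long printf/echo sequence running through this part of the trace is gen_random_s (target/invalid.sh@19-@31) building a test string one character at a time: chars holds the ASCII codes 32 through 127, each pass picks one code, renders it with printf %x and echo -e, and appends it; RANDOM=0 (invalid.sh@16) seeds bash's generator so the strings are reproducible across runs. A condensed sketch of what the traced steps do (the index-selection expression is an assumption, not copied from invalid.sh):

  # Condensed sketch of the traced loop; not a verbatim copy of invalid.sh.
  gen_random_s() {
      local length=$1 ll string chars
      chars=($(seq 32 127))                    # printable ASCII plus DEL (0x7f)
      for ((ll = 0; ll < length; ll++)); do
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      # the real helper also checks for a leading '-' (invalid.sh@28) before echoing
      echo "$string"
  }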
00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.805 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:07:25.806 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'g=LZyRYk,UNoA@s[#P$?' 00:07:25.806 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'g=LZyRYk,UNoA@s[#P$?' nqn.2016-06.io.spdk:cnode30312 00:07:26.064 [2024-07-15 19:23:15.695951] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30312: invalid serial number 'g=LZyRYk,UNoA@s[#P$?' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 19:23:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30312 serial_number:g=LZyRYk,UNoA@s[#P$?], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN g=LZyRYk,UNoA@s[#P$? 00:07:26.064 request: 00:07:26.064 { 00:07:26.064 "method": "nvmf_create_subsystem", 00:07:26.064 "params": { 00:07:26.064 "nqn": "nqn.2016-06.io.spdk:cnode30312", 00:07:26.064 "serial_number": "g=LZ\u007fyRYk,UNoA@s[#P$?" 00:07:26.064 } 00:07:26.064 } 00:07:26.064 Got JSON-RPC error response 00:07:26.064 GoRPCClient: error on JSON-RPC call' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 19:23:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30312 serial_number:g=LZyRYk,UNoA@s[#P$?], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN g=LZyRYk,UNoA@s[#P$? 00:07:26.064 request: 00:07:26.064 { 00:07:26.064 "method": "nvmf_create_subsystem", 00:07:26.064 "params": { 00:07:26.064 "nqn": "nqn.2016-06.io.spdk:cnode30312", 00:07:26.064 "serial_number": "g=LZ\u007fyRYk,UNoA@s[#P$?" 
00:07:26.064 } 00:07:26.064 } 00:07:26.064 Got JSON-RPC error response 00:07:26.064 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:26.064 19:23:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:26.064 19:23:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.064 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:26.065 19:23:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:26.065 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:26.324 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.325 
19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\''\lxi8' 00:07:26.325 19:23:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\''\lxi8' nqn.2016-06.io.spdk:cnode16100 00:07:26.584 [2024-07-15 19:23:16.164323] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16100: invalid model number '5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\lxi8' 00:07:26.584 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 19:23:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\''\lxi8 nqn:nqn.2016-06.io.spdk:cnode16100], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\''\lxi8 00:07:26.584 request: 00:07:26.584 { 00:07:26.584 "method": "nvmf_create_subsystem", 00:07:26.584 "params": { 00:07:26.584 "nqn": "nqn.2016-06.io.spdk:cnode16100", 00:07:26.584 "model_number": "5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\''\\lxi8" 00:07:26.584 } 00:07:26.584 } 00:07:26.584 Got JSON-RPC error response 00:07:26.584 GoRPCClient: error on JSON-RPC call' 00:07:26.584 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 19:23:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\lxi8 nqn:nqn.2016-06.io.spdk:cnode16100], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\lxi8 00:07:26.584 request: 00:07:26.584 { 00:07:26.584 "method": "nvmf_create_subsystem", 00:07:26.584 "params": { 00:07:26.584 "nqn": "nqn.2016-06.io.spdk:cnode16100", 00:07:26.584 "model_number": "5Z:X#&,f(MTmeuMj_M%;9SUnzO*P^Bn&.#^'\\lxi8" 00:07:26.584 } 00:07:26.584 } 00:07:26.584 Got JSON-RPC error response 00:07:26.584 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:26.584 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:26.842 [2024-07-15 19:23:16.452694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.843 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:27.101 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:27.101 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:27.101 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:27.101 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:27.101 19:23:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:27.360 [2024-07-15 19:23:17.095403] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:27.360 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:27.360 request: 00:07:27.360 { 00:07:27.360 "method": "nvmf_subsystem_remove_listener", 00:07:27.360 "params": { 00:07:27.360 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:27.360 "listen_address": { 00:07:27.360 "trtype": "tcp", 00:07:27.360 "traddr": "", 00:07:27.360 "trsvcid": "4421" 00:07:27.360 } 00:07:27.360 } 00:07:27.360 } 00:07:27.360 Got JSON-RPC error response 00:07:27.360 GoRPCClient: error on JSON-RPC call' 00:07:27.360 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:27.360 request: 00:07:27.360 { 00:07:27.360 "method": "nvmf_subsystem_remove_listener", 00:07:27.360 "params": { 00:07:27.360 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:27.360 "listen_address": { 00:07:27.360 "trtype": "tcp", 00:07:27.360 "traddr": "", 00:07:27.360 "trsvcid": "4421" 00:07:27.360 } 00:07:27.360 } 00:07:27.360 } 00:07:27.360 Got JSON-RPC error response 00:07:27.360 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:27.360 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7544 -i 0 00:07:27.619 [2024-07-15 19:23:17.387607] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7544: invalid cntlid range [0-65519] 00:07:27.619 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7544], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:27.619 request: 00:07:27.619 { 00:07:27.619 "method": "nvmf_create_subsystem", 00:07:27.619 "params": { 00:07:27.619 "nqn": "nqn.2016-06.io.spdk:cnode7544", 00:07:27.619 "min_cntlid": 0 00:07:27.619 } 00:07:27.619 } 00:07:27.619 Got JSON-RPC 
error response 00:07:27.619 GoRPCClient: error on JSON-RPC call' 00:07:27.619 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7544], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:27.619 request: 00:07:27.619 { 00:07:27.619 "method": "nvmf_create_subsystem", 00:07:27.619 "params": { 00:07:27.619 "nqn": "nqn.2016-06.io.spdk:cnode7544", 00:07:27.619 "min_cntlid": 0 00:07:27.619 } 00:07:27.619 } 00:07:27.619 Got JSON-RPC error response 00:07:27.619 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.619 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27881 -i 65520 00:07:28.186 [2024-07-15 19:23:17.699889] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27881: invalid cntlid range [65520-65519] 00:07:28.186 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode27881], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:28.186 request: 00:07:28.186 { 00:07:28.186 "method": "nvmf_create_subsystem", 00:07:28.186 "params": { 00:07:28.186 "nqn": "nqn.2016-06.io.spdk:cnode27881", 00:07:28.186 "min_cntlid": 65520 00:07:28.186 } 00:07:28.186 } 00:07:28.186 Got JSON-RPC error response 00:07:28.186 GoRPCClient: error on JSON-RPC call' 00:07:28.186 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode27881], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:28.186 request: 00:07:28.186 { 00:07:28.186 "method": "nvmf_create_subsystem", 00:07:28.186 "params": { 00:07:28.186 "nqn": "nqn.2016-06.io.spdk:cnode27881", 00:07:28.186 "min_cntlid": 65520 00:07:28.186 } 00:07:28.186 } 00:07:28.186 Got JSON-RPC error response 00:07:28.186 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:28.186 19:23:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode199 -I 0 00:07:28.186 [2024-07-15 19:23:17.980151] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode199: invalid cntlid range [1-0] 00:07:28.446 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode199], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:28.446 request: 00:07:28.446 { 00:07:28.446 "method": "nvmf_create_subsystem", 00:07:28.446 "params": { 00:07:28.446 "nqn": "nqn.2016-06.io.spdk:cnode199", 00:07:28.446 "max_cntlid": 0 00:07:28.446 } 00:07:28.446 } 00:07:28.446 Got JSON-RPC error response 00:07:28.446 GoRPCClient: error on JSON-RPC call' 00:07:28.446 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 19:23:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode199], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:28.446 request: 00:07:28.446 { 00:07:28.446 "method": "nvmf_create_subsystem", 00:07:28.446 "params": { 00:07:28.446 "nqn": "nqn.2016-06.io.spdk:cnode199", 00:07:28.446 "max_cntlid": 0 00:07:28.446 } 00:07:28.446 } 00:07:28.446 Got JSON-RPC error response 00:07:28.446 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:28.446 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12385 -I 65520 00:07:28.705 [2024-07-15 19:23:18.259718] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12385: invalid cntlid range [1-65520] 00:07:28.705 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 19:23:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12385], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:28.705 request: 00:07:28.705 { 00:07:28.705 "method": "nvmf_create_subsystem", 00:07:28.705 "params": { 00:07:28.705 "nqn": "nqn.2016-06.io.spdk:cnode12385", 00:07:28.705 "max_cntlid": 65520 00:07:28.705 } 00:07:28.705 } 00:07:28.705 Got JSON-RPC error response 00:07:28.705 GoRPCClient: error on JSON-RPC call' 00:07:28.705 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 19:23:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12385], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:28.705 request: 00:07:28.705 { 00:07:28.705 "method": "nvmf_create_subsystem", 00:07:28.705 "params": { 00:07:28.705 "nqn": "nqn.2016-06.io.spdk:cnode12385", 00:07:28.705 "max_cntlid": 65520 00:07:28.705 } 00:07:28.705 } 00:07:28.705 Got JSON-RPC error response 00:07:28.705 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:28.705 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31287 -i 6 -I 5 00:07:28.964 [2024-07-15 19:23:18.547984] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31287: invalid cntlid range [6-5] 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 19:23:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31287], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:28.964 request: 00:07:28.964 { 00:07:28.964 "method": "nvmf_create_subsystem", 00:07:28.964 "params": { 00:07:28.964 "nqn": "nqn.2016-06.io.spdk:cnode31287", 00:07:28.964 "min_cntlid": 6, 00:07:28.964 "max_cntlid": 5 00:07:28.964 } 00:07:28.964 } 00:07:28.964 Got JSON-RPC error response 00:07:28.964 GoRPCClient: error on JSON-RPC call' 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 19:23:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31287], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:28.964 
request: 00:07:28.964 { 00:07:28.964 "method": "nvmf_create_subsystem", 00:07:28.964 "params": { 00:07:28.964 "nqn": "nqn.2016-06.io.spdk:cnode31287", 00:07:28.964 "min_cntlid": 6, 00:07:28.964 "max_cntlid": 5 00:07:28.964 } 00:07:28.964 } 00:07:28.964 Got JSON-RPC error response 00:07:28.964 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:28.964 { 00:07:28.964 "name": "foobar", 00:07:28.964 "method": "nvmf_delete_target", 00:07:28.964 "req_id": 1 00:07:28.964 } 00:07:28.964 Got JSON-RPC error response 00:07:28.964 response: 00:07:28.964 { 00:07:28.964 "code": -32602, 00:07:28.964 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:28.964 }' 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:28.964 { 00:07:28.964 "name": "foobar", 00:07:28.964 "method": "nvmf_delete_target", 00:07:28.964 "req_id": 1 00:07:28.964 } 00:07:28.964 Got JSON-RPC error response 00:07:28.964 response: 00:07:28.964 { 00:07:28.964 "code": -32602, 00:07:28.964 "message": "The specified target doesn't exist, cannot delete it." 00:07:28.964 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.964 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.223 rmmod nvme_tcp 00:07:29.223 rmmod nvme_fabrics 00:07:29.223 rmmod nvme_keyring 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67761 ']' 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67761 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67761 ']' 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67761 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67761 00:07:29.223 killing process with pid 67761 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 67761' 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67761 00:07:29.223 19:23:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67761 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:29.482 00:07:29.482 real 0m6.233s 00:07:29.482 user 0m25.159s 00:07:29.482 sys 0m1.274s 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.482 19:23:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:29.482 ************************************ 00:07:29.482 END TEST nvmf_invalid 00:07:29.482 ************************************ 00:07:29.482 19:23:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.483 19:23:19 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.483 19:23:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.483 19:23:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.483 19:23:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.483 ************************************ 00:07:29.483 START TEST nvmf_abort 00:07:29.483 ************************************ 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.483 * Looking for test storage... 
00:07:29.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:29.483 Cannot find device "nvmf_tgt_br" 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.483 Cannot find device "nvmf_tgt_br2" 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:29.483 Cannot find device "nvmf_tgt_br" 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:29.483 Cannot find device "nvmf_tgt_br2" 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:29.483 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.742 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:30.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:07:30.001 00:07:30.001 --- 10.0.0.2 ping statistics --- 00:07:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.001 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:30.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:30.001 00:07:30.001 --- 10.0.0.3 ping statistics --- 00:07:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.001 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:30.001 00:07:30.001 --- 10.0.0.1 ping statistics --- 00:07:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.001 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68250 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68250 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68250 ']' 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.001 19:23:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.001 [2024-07-15 19:23:19.638279] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:07:30.001 [2024-07-15 19:23:19.638388] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.001 [2024-07-15 19:23:19.771524] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.259 [2024-07-15 19:23:19.831263] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.259 [2024-07-15 19:23:19.831511] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:30.259 [2024-07-15 19:23:19.831594] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.259 [2024-07-15 19:23:19.831681] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.259 [2024-07-15 19:23:19.831752] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.259 [2024-07-15 19:23:19.834391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.259 [2024-07-15 19:23:19.834487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.259 [2024-07-15 19:23:19.834660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 [2024-07-15 19:23:20.701415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 Malloc0 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 Delay0 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 [2024-07-15 19:23:20.764505] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.194 19:23:20 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:31.194 [2024-07-15 19:23:20.944519] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:33.719 Initializing NVMe Controllers 00:07:33.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:33.719 controller IO queue size 128 less than required 00:07:33.719 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:33.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:33.719 Initialization complete. Launching workers. 
00:07:33.719 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31326 00:07:33.719 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31387, failed to submit 62 00:07:33.719 success 31330, unsuccess 57, failed 0 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.719 19:23:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.719 rmmod nvme_tcp 00:07:33.719 rmmod nvme_fabrics 00:07:33.719 rmmod nvme_keyring 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68250 ']' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68250 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68250 ']' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68250 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68250 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:33.719 killing process with pid 68250 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68250' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68250 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68250 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:33.719 ************************************ 00:07:33.719 END TEST nvmf_abort 00:07:33.719 ************************************ 00:07:33.719 00:07:33.719 real 0m4.218s 00:07:33.719 user 0m12.217s 00:07:33.719 sys 0m0.994s 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.719 19:23:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.719 19:23:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.719 19:23:23 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.719 19:23:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.719 19:23:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.719 19:23:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.719 ************************************ 00:07:33.719 START TEST nvmf_ns_hotplug_stress 00:07:33.719 ************************************ 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.719 * Looking for test storage... 00:07:33.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.719 19:23:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.719 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.720 19:23:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:33.720 Cannot find device "nvmf_tgt_br" 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:33.720 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.978 Cannot find device "nvmf_tgt_br2" 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:33.978 Cannot find device "nvmf_tgt_br" 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:33.978 Cannot find device "nvmf_tgt_br2" 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:33.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:33.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:33.978 19:23:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:33.978 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:34.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:07:34.237 00:07:34.237 --- 10.0.0.2 ping statistics --- 00:07:34.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.237 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:34.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:07:34.237 00:07:34.237 --- 10.0.0.3 ping statistics --- 00:07:34.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.237 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:34.237 00:07:34.237 --- 10.0.0.1 ping statistics --- 00:07:34.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.237 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68503 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68503 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68503 ']' 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.237 19:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.237 [2024-07-15 19:23:23.907395] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:07:34.237 [2024-07-15 19:23:23.907752] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.495 [2024-07-15 19:23:24.042609] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.495 [2024-07-15 19:23:24.112701] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:34.495 [2024-07-15 19:23:24.112983] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.495 [2024-07-15 19:23:24.113282] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.495 [2024-07-15 19:23:24.113562] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.495 [2024-07-15 19:23:24.113734] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.495 [2024-07-15 19:23:24.114041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.495 [2024-07-15 19:23:24.114134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.495 [2024-07-15 19:23:24.114146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:35.430 19:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.688 [2024-07-15 19:23:25.250369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.688 19:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:35.946 19:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.204 [2024-07-15 19:23:25.860403] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.204 19:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.463 19:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:36.721 Malloc0 00:07:36.721 19:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.980 Delay0 00:07:36.980 19:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.237 19:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:37.496 NULL1 00:07:37.496 
19:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:37.754 19:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68644 00:07:37.754 19:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:37.754 19:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:37.754 19:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.140 Read completed with error (sct=0, sc=11) 00:07:39.140 19:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.398 19:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:39.398 19:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:39.656 true 00:07:39.656 19:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:39.656 19:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.220 19:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.478 19:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:40.478 19:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:40.735 true 00:07:40.993 19:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:40.993 19:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.993 19:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.250 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:41.250 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:41.508 true 00:07:41.508 19:23:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:41.508 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.766 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.024 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:42.024 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:42.282 true 00:07:42.282 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:42.282 19:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.215 19:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.730 19:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:43.730 19:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:44.039 true 00:07:44.039 19:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:44.039 19:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.619 19:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.877 19:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:44.877 19:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:45.134 true 00:07:45.134 19:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:45.134 19:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.697 19:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.697 19:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:45.697 19:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:45.953 true 00:07:46.209 19:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:46.209 19:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.466 19:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.723 19:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:46.723 19:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:46.980 true 00:07:46.980 19:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:46.980 19:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.238 19:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.495 19:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:47.495 19:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:47.752 true 00:07:47.752 19:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:47.752 19:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.685 19:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.943 19:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:48.943 19:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:49.201 true 00:07:49.201 19:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:49.201 19:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.458 19:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.716 19:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:49.716 19:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:49.973 true 00:07:50.232 19:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:50.232 19:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.490 19:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.490 19:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:50.490 19:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:50.748 true 00:07:50.748 19:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:50.748 19:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.681 19:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.938 19:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:51.938 19:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:52.196 true 00:07:52.196 19:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:52.196 19:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.462 19:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.721 19:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:52.721 19:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:52.979 true 00:07:52.979 19:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:52.979 19:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.237 19:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.495 19:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:53.495 19:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:53.752 true 00:07:53.752 19:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:53.752 19:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.704 19:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.962 
19:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:54.962 19:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:55.220 true 00:07:55.220 19:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:55.220 19:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.478 19:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.736 19:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:55.737 19:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:55.995 true 00:07:55.995 19:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:55.995 19:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.254 19:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.512 19:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:56.512 19:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:57.078 true 00:07:57.078 19:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:57.078 19:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.643 19:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.902 19:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:57.902 19:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:58.469 true 00:07:58.469 19:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:58.469 19:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.727 19:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.985 19:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:58.985 19:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:59.243 true 00:07:59.243 19:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:07:59.243 19:23:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.501 19:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.758 19:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:59.758 19:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:00.016 true 00:08:00.016 19:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:00.016 19:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.274 19:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.531 19:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:00.531 19:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:00.788 true 00:08:01.046 19:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:01.046 19:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.613 19:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.886 19:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:01.886 19:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:02.453 true 00:08:02.453 19:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:02.453 19:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.453 19:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.711 19:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:02.711 19:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:03.276 true 00:08:03.276 19:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:03.276 19:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.534 19:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.791 19:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:03.791 19:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:04.049 true 00:08:04.049 19:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:04.049 19:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.307 19:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.564 19:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:04.564 19:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:04.823 true 00:08:04.823 19:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:04.823 19:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.757 19:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.015 19:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:06.015 19:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:06.274 true 00:08:06.274 19:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:06.274 19:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.532 19:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.791 19:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:06.791 19:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:07.049 true 00:08:07.049 19:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:07.049 19:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.307 19:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.564 19:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:07.564 19:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:07.821 true 00:08:07.822 19:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644 00:08:07.822 19:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:08.753 Initializing NVMe Controllers
00:08:08.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:08.753 Controller IO queue size 128, less than required.
00:08:08.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:08.753 Controller IO queue size 128, less than required.
00:08:08.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:08.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:08.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:08.753 Initialization complete. Launching workers.
00:08:08.753 ========================================================
00:08:08.753                                                           Latency(us)
00:08:08.753 Device Information                                                       :  IOPS     MiB/s   Average      min        max
00:08:08.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  599.00   0.29   87064.50   3570.04   1036421.98
00:08:08.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7711.70   3.77   16598.47   3502.38    613500.08
00:08:08.753 ========================================================
00:08:08.753 Total                                                                  : 8310.70   4.06   21677.36   3502.38   1036421.98
00:08:08.753
00:08:08.753 19:23:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.012 19:23:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:09.012 19:23:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:08:09.275 true
00:08:09.275 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68644
00:08:09.275 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68644) - No such process
00:08:09.275 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68644
00:08:09.275 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.533 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:09.791 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:09.791 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:09.791 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:09.791 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:09.791 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:10.048 null0
00:08:10.048 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:10.048 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:10.048 19:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:10.305 null1 00:08:10.305 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.305 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.305 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:10.564 null2 00:08:10.564 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.564 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.564 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:10.822 null3 00:08:10.822 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.822 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.822 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:11.080 null4 00:08:11.080 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.080 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.080 19:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:11.339 null5 00:08:11.339 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.339 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.339 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:11.597 null6 00:08:11.597 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.597 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.597 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:11.855 null7 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
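The interleaved xtrace that follows comes from eight backgrounded add/remove workers, one per namespace ID, each backed by one of the null0..null7 bdevs created above. A minimal sketch of that pattern, reconstructed only from the rpc.py calls visible in this trace (the helper name, variable names, and exact loop layout of ns_hotplug_stress.sh are assumptions, not the script's literal contents):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
    # Repeatedly attach and detach one namespace ID to exercise the hotplug
    # paths; the trace shows ten iterations per worker ("(( i < 10 ))").
    local i nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc" bdev_null_create "null$i" 100 4096   # same arguments as traced above
    add_remove "$((i + 1))" "null$i" &          # worker for nsid i+1 backed by null$i
    pids+=($!)
done
wait "${pids[@]}"

Because each worker runs in its own background subshell, the add/remove RPCs for different namespace IDs reach the target concurrently, which is what produces the interleaving seen in the rest of this trace.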
00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:11.855 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.856 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69704 69705 69708 69710 69712 69713 69715 69717 00:08:12.113 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.113 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.113 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.113 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.113 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.113 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.372 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.372 19:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.372 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.373 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:08:12.631 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.889 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.148 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.406 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.406 19:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.406 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.406 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.406 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.406 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.406 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.663 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.663 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.663 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.663 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.663 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.664 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.922 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.181 19:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.439 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.697 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.955 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.212 19:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.477 19:24:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.477 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.735 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.993 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.251 19:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.509 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.767 19:24:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.767 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.025 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.025 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.025 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.025 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.025 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.025 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.283 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.283 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.283 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.283 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.283 19:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.283 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.283 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.283 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.283 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.283 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:08:17.283 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.541 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.798 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.799 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.057 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.314 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.315 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.315 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.315 19:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.315 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.573 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.830 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.830 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.830 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.830 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.831 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.831 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.831 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.089 19:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.347 rmmod nvme_tcp 00:08:19.347 rmmod nvme_fabrics 00:08:19.347 rmmod nvme_keyring 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68503 ']' 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68503 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@948 -- # '[' -z 68503 ']' 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68503 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.347 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68503 00:08:19.347 killing process with pid 68503 00:08:19.348 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:19.348 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:19.348 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68503' 00:08:19.348 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68503 00:08:19.348 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68503 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:19.606 00:08:19.606 real 0m45.910s 00:08:19.606 user 3m49.748s 00:08:19.606 sys 0m13.659s 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.606 19:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 ************************************ 00:08:19.606 END TEST nvmf_ns_hotplug_stress 00:08:19.606 ************************************ 00:08:19.606 19:24:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:19.606 19:24:09 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:19.606 19:24:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.606 19:24:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.606 19:24:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.606 ************************************ 00:08:19.606 START TEST nvmf_connect_stress 00:08:19.606 ************************************ 00:08:19.606 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:19.606 * Looking for test storage... 
00:08:19.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:19.864 Cannot find device "nvmf_tgt_br" 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.864 Cannot find device "nvmf_tgt_br2" 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:19.864 Cannot find device "nvmf_tgt_br" 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:19.864 Cannot find device "nvmf_tgt_br2" 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:19.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.864 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:20.121 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:20.121 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:20.121 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:20.121 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:20.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:20.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:08:20.122 00:08:20.122 --- 10.0.0.2 ping statistics --- 00:08:20.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.122 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:20.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:20.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:20.122 00:08:20.122 --- 10.0.0.3 ping statistics --- 00:08:20.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.122 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:20.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:20.122 00:08:20.122 --- 10.0.0.1 ping statistics --- 00:08:20.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.122 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71054 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71054 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71054 ']' 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
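
For orientation, the nvmf_veth_init sequence traced above reduces to the following topology: the target side lives in a network namespace, the initiator side stays in the root namespace, and a bridge ties the host ends of the veth pairs together. This is a condensed sketch built only from the commands visible in the trace (the link-up steps and the FORWARD rule are elided for brevity), not a replacement for nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk                                   # target namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator pair (root ns)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                 # host-side bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                              # reachability checks as above

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm both directions work before the target is started.
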
00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.122 19:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.122 [2024-07-15 19:24:09.841750] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:20.122 [2024-07-15 19:24:09.841841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.379 [2024-07-15 19:24:09.979677] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.379 [2024-07-15 19:24:10.041354] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.379 [2024-07-15 19:24:10.041426] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.379 [2024-07-15 19:24:10.041439] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.379 [2024-07-15 19:24:10.041447] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.379 [2024-07-15 19:24:10.041455] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.379 [2024-07-15 19:24:10.041589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.379 [2024-07-15 19:24:10.042121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.379 [2024-07-15 19:24:10.042137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.379 [2024-07-15 19:24:10.158902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.379 [2024-07-15 19:24:10.176597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.379 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.636 NULL1 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71087 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.636 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.637 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.894 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.894 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:20.894 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.894 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.894 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.152 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.152 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:21.152 19:24:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.152 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.152 19:24:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.730 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:21.730 19:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:21.730 19:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.730 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.730 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.988 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.988 19:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:21.988 19:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.988 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.988 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.247 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.247 19:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:22.247 19:24:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.247 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.247 19:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.504 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.504 19:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:22.504 19:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.504 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.504 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.761 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.761 19:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:22.761 19:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.761 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.761 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.420 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.420 19:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:23.420 19:24:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.420 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.420 19:24:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.420 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.420 19:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:23.420 19:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.420 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.420 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.677 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.677 19:24:13 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71087 00:08:23.677 19:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.677 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.677 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.240 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.240 19:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:24.240 19:24:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.240 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.240 19:24:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.497 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.497 19:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:24.497 19:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.497 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.497 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.754 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.754 19:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:24.754 19:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.754 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.754 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.011 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.011 19:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:25.011 19:24:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.011 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.011 19:24:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.576 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.576 19:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:25.576 19:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.576 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.576 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.833 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.833 19:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:25.833 19:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.833 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.833 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.090 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.090 19:24:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:26.090 19:24:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.090 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.090 19:24:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.348 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.348 19:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:26.348 19:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.348 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.348 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.607 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.607 19:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:26.607 19:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.607 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.607 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.173 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.173 19:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:27.173 19:24:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.173 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.173 19:24:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.432 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.432 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:27.432 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.432 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.432 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.719 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.719 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:27.719 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.719 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.719 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.976 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.976 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:27.976 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.976 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.976 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.234 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.234 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:28.234 19:24:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:08:28.234 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.234 19:24:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.800 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.800 19:24:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:28.800 19:24:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.800 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.800 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.058 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.058 19:24:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:29.058 19:24:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.058 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.058 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.315 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.315 19:24:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:29.315 19:24:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.315 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.315 19:24:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.574 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.574 19:24:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:29.574 19:24:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.574 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.574 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.831 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.831 19:24:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:29.831 19:24:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.832 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.832 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.397 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.397 19:24:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:30.397 19:24:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.397 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.397 19:24:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.655 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.655 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:30.655 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.655 19:24:20 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.655 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.655 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71087 00:08:30.913 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71087) - No such process 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71087 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.913 rmmod nvme_tcp 00:08:30.913 rmmod nvme_fabrics 00:08:30.913 rmmod nvme_keyring 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71054 ']' 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71054 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71054 ']' 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71054 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71054 00:08:30.913 killing process with pid 71054 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71054' 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71054 00:08:30.913 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71054 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
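
With the stress run complete and teardown under way, it is worth condensing what the trace above actually did between target start and cleanup. A hand-runnable equivalent is sketched below, assuming rpc_cmd is the usual test wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (paths are relative to the spdk repo; pids 71054 and 71087 are the target and the stress tool seen above):

    # configure the TCP target started inside the namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512

    # run the connect/disconnect stress tool against that listener for 10 seconds
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!

    # while it runs, keep poking the target over RPC (the kill -0 / rpc_cmd loop in the trace);
    # replaying rpc.txt is an assumption based on the seq-1-20 loop that builds the file above
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt
    done
    wait "$PERF_PID"

The point of the test is that the target keeps answering RPCs while initiators connect and disconnect aggressively; the "No such process" message above is simply the polling loop noticing that the stress tool exited after its 10 second run.
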
00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.172 ************************************ 00:08:31.172 END TEST nvmf_connect_stress 00:08:31.172 ************************************ 00:08:31.172 00:08:31.172 real 0m11.573s 00:08:31.172 user 0m38.454s 00:08:31.172 sys 0m3.441s 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.172 19:24:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.172 19:24:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.172 19:24:20 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:31.172 19:24:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.172 19:24:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.172 19:24:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.172 ************************************ 00:08:31.172 START TEST nvmf_fused_ordering 00:08:31.172 ************************************ 00:08:31.172 19:24:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:31.430 * Looking for test storage... 
00:08:31.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.430 Cannot find device "nvmf_tgt_br" 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.430 Cannot find device "nvmf_tgt_br2" 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.430 Cannot find device "nvmf_tgt_br" 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.430 Cannot find device "nvmf_tgt_br2" 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.430 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:31.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.431 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:08:31.689 00:08:31.689 --- 10.0.0.2 ping statistics --- 00:08:31.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.689 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:31.689 00:08:31.689 --- 10.0.0.3 ping statistics --- 00:08:31.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.689 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:31.689 00:08:31.689 --- 10.0.0.1 ping statistics --- 00:08:31.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.689 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71414 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71414 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71414 ']' 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
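
As in the previous test, the target for fused_ordering is launched inside the namespace, this time with core mask 0x2, and the script blocks until the RPC socket answers before issuing any rpc_cmd calls. The launch line below is taken from the NVMF_APP composition in the trace; the polling loop is only one plausible way to reproduce the wait step by hand, since the trace does not show what waitforlisten does internally:

    # start the target in the namespace; -i 0 sets the shm id, -e 0xFFFF the tracepoint group mask
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # wait until the RPC socket at /var/tmp/spdk.sock (the default) starts answering
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket responds, the DPDK/EAL and reactor start-up notices below follow, and the transport, subsystem, listener and null bdev are created over RPC much as in the connect_stress run.
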
00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.689 19:24:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.689 [2024-07-15 19:24:21.474688] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:31.689 [2024-07-15 19:24:21.474773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.947 [2024-07-15 19:24:21.608335] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.947 [2024-07-15 19:24:21.668312] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.947 [2024-07-15 19:24:21.668378] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.947 [2024-07-15 19:24:21.668391] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.947 [2024-07-15 19:24:21.668400] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.947 [2024-07-15 19:24:21.668407] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.947 [2024-07-15 19:24:21.668438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:32.880 [2024-07-15 19:24:22.525374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:08:32.880 [2024-07-15 19:24:22.541491] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:32.880 NULL1 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:32.880 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.881 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:32.881 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.881 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:32.881 19:24:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.881 19:24:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:32.881 [2024-07-15 19:24:22.594781] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
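Everything the fused_ordering test needs on the target side has now been provisioned over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 1000 MB null bdev (512-byte blocks, reported above as a 1GB namespace) attached as namespace 1. The rpc_cmd wrapper used by the script boils down to calls like these (a sketch using the stock scripts/rpc.py client against the default /var/tmp/spdk.sock; the wrapper's retry handling is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool is then pointed at that listener via the trid string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the fused_ordering(N) lines that follow are its per-iteration progress output, 1024 iterations in this run.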
00:08:32.881 [2024-07-15 19:24:22.594859] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71471 ] 00:08:33.514 Attached to nqn.2016-06.io.spdk:cnode1 00:08:33.514 Namespace ID: 1 size: 1GB 00:08:33.514 fused_ordering(0) 00:08:33.514 fused_ordering(1) 00:08:33.514 fused_ordering(2) 00:08:33.514 fused_ordering(3) 00:08:33.514 fused_ordering(4) 00:08:33.514 fused_ordering(5) 00:08:33.514 fused_ordering(6) 00:08:33.514 fused_ordering(7) 00:08:33.514 fused_ordering(8) 00:08:33.514 fused_ordering(9) 00:08:33.514 fused_ordering(10) 00:08:33.514 fused_ordering(11) 00:08:33.514 fused_ordering(12) 00:08:33.514 fused_ordering(13) 00:08:33.514 fused_ordering(14) 00:08:33.514 fused_ordering(15) 00:08:33.514 fused_ordering(16) 00:08:33.514 fused_ordering(17) 00:08:33.514 fused_ordering(18) 00:08:33.514 fused_ordering(19) 00:08:33.514 fused_ordering(20) 00:08:33.514 fused_ordering(21) 00:08:33.514 fused_ordering(22) 00:08:33.514 fused_ordering(23) 00:08:33.514 fused_ordering(24) 00:08:33.514 fused_ordering(25) 00:08:33.514 fused_ordering(26) 00:08:33.514 fused_ordering(27) 00:08:33.514 fused_ordering(28) 00:08:33.514 fused_ordering(29) 00:08:33.514 fused_ordering(30) 00:08:33.514 fused_ordering(31) 00:08:33.514 fused_ordering(32) 00:08:33.514 fused_ordering(33) 00:08:33.514 fused_ordering(34) 00:08:33.514 fused_ordering(35) 00:08:33.514 fused_ordering(36) 00:08:33.515 fused_ordering(37) 00:08:33.515 fused_ordering(38) 00:08:33.515 fused_ordering(39) 00:08:33.515 fused_ordering(40) 00:08:33.515 fused_ordering(41) 00:08:33.515 fused_ordering(42) 00:08:33.515 fused_ordering(43) 00:08:33.515 fused_ordering(44) 00:08:33.515 fused_ordering(45) 00:08:33.515 fused_ordering(46) 00:08:33.515 fused_ordering(47) 00:08:33.515 fused_ordering(48) 00:08:33.515 fused_ordering(49) 00:08:33.515 fused_ordering(50) 00:08:33.515 fused_ordering(51) 00:08:33.515 fused_ordering(52) 00:08:33.515 fused_ordering(53) 00:08:33.515 fused_ordering(54) 00:08:33.515 fused_ordering(55) 00:08:33.515 fused_ordering(56) 00:08:33.515 fused_ordering(57) 00:08:33.515 fused_ordering(58) 00:08:33.515 fused_ordering(59) 00:08:33.515 fused_ordering(60) 00:08:33.515 fused_ordering(61) 00:08:33.515 fused_ordering(62) 00:08:33.515 fused_ordering(63) 00:08:33.515 fused_ordering(64) 00:08:33.515 fused_ordering(65) 00:08:33.515 fused_ordering(66) 00:08:33.515 fused_ordering(67) 00:08:33.515 fused_ordering(68) 00:08:33.515 fused_ordering(69) 00:08:33.515 fused_ordering(70) 00:08:33.515 fused_ordering(71) 00:08:33.515 fused_ordering(72) 00:08:33.515 fused_ordering(73) 00:08:33.515 fused_ordering(74) 00:08:33.515 fused_ordering(75) 00:08:33.515 fused_ordering(76) 00:08:33.515 fused_ordering(77) 00:08:33.515 fused_ordering(78) 00:08:33.515 fused_ordering(79) 00:08:33.515 fused_ordering(80) 00:08:33.515 fused_ordering(81) 00:08:33.515 fused_ordering(82) 00:08:33.515 fused_ordering(83) 00:08:33.515 fused_ordering(84) 00:08:33.515 fused_ordering(85) 00:08:33.515 fused_ordering(86) 00:08:33.515 fused_ordering(87) 00:08:33.515 fused_ordering(88) 00:08:33.515 fused_ordering(89) 00:08:33.515 fused_ordering(90) 00:08:33.515 fused_ordering(91) 00:08:33.515 fused_ordering(92) 00:08:33.515 fused_ordering(93) 00:08:33.515 fused_ordering(94) 00:08:33.515 fused_ordering(95) 00:08:33.515 fused_ordering(96) 00:08:33.515 fused_ordering(97) 00:08:33.515 
fused_ordering(98) 00:08:33.515 fused_ordering(99) 00:08:33.515 fused_ordering(100) 00:08:33.515 fused_ordering(101) 00:08:33.515 fused_ordering(102) 00:08:33.515 fused_ordering(103) 00:08:33.515 fused_ordering(104) 00:08:33.515 fused_ordering(105) 00:08:33.515 fused_ordering(106) 00:08:33.515 fused_ordering(107) 00:08:33.515 fused_ordering(108) 00:08:33.515 fused_ordering(109) 00:08:33.515 fused_ordering(110) 00:08:33.515 fused_ordering(111) 00:08:33.515 fused_ordering(112) 00:08:33.515 fused_ordering(113) 00:08:33.515 fused_ordering(114) 00:08:33.515 fused_ordering(115) 00:08:33.515 fused_ordering(116) 00:08:33.515 fused_ordering(117) 00:08:33.515 fused_ordering(118) 00:08:33.515 fused_ordering(119) 00:08:33.515 fused_ordering(120) 00:08:33.515 fused_ordering(121) 00:08:33.515 fused_ordering(122) 00:08:33.515 fused_ordering(123) 00:08:33.515 fused_ordering(124) 00:08:33.515 fused_ordering(125) 00:08:33.515 fused_ordering(126) 00:08:33.515 fused_ordering(127) 00:08:33.515 fused_ordering(128) 00:08:33.515 fused_ordering(129) 00:08:33.515 fused_ordering(130) 00:08:33.515 fused_ordering(131) 00:08:33.515 fused_ordering(132) 00:08:33.515 fused_ordering(133) 00:08:33.515 fused_ordering(134) 00:08:33.515 fused_ordering(135) 00:08:33.515 fused_ordering(136) 00:08:33.515 fused_ordering(137) 00:08:33.515 fused_ordering(138) 00:08:33.515 fused_ordering(139) 00:08:33.515 fused_ordering(140) 00:08:33.515 fused_ordering(141) 00:08:33.515 fused_ordering(142) 00:08:33.515 fused_ordering(143) 00:08:33.515 fused_ordering(144) 00:08:33.515 fused_ordering(145) 00:08:33.515 fused_ordering(146) 00:08:33.515 fused_ordering(147) 00:08:33.515 fused_ordering(148) 00:08:33.515 fused_ordering(149) 00:08:33.515 fused_ordering(150) 00:08:33.515 fused_ordering(151) 00:08:33.515 fused_ordering(152) 00:08:33.515 fused_ordering(153) 00:08:33.515 fused_ordering(154) 00:08:33.515 fused_ordering(155) 00:08:33.515 fused_ordering(156) 00:08:33.515 fused_ordering(157) 00:08:33.515 fused_ordering(158) 00:08:33.515 fused_ordering(159) 00:08:33.515 fused_ordering(160) 00:08:33.515 fused_ordering(161) 00:08:33.515 fused_ordering(162) 00:08:33.515 fused_ordering(163) 00:08:33.515 fused_ordering(164) 00:08:33.515 fused_ordering(165) 00:08:33.515 fused_ordering(166) 00:08:33.515 fused_ordering(167) 00:08:33.515 fused_ordering(168) 00:08:33.515 fused_ordering(169) 00:08:33.515 fused_ordering(170) 00:08:33.515 fused_ordering(171) 00:08:33.515 fused_ordering(172) 00:08:33.515 fused_ordering(173) 00:08:33.515 fused_ordering(174) 00:08:33.515 fused_ordering(175) 00:08:33.515 fused_ordering(176) 00:08:33.515 fused_ordering(177) 00:08:33.515 fused_ordering(178) 00:08:33.515 fused_ordering(179) 00:08:33.515 fused_ordering(180) 00:08:33.515 fused_ordering(181) 00:08:33.515 fused_ordering(182) 00:08:33.515 fused_ordering(183) 00:08:33.515 fused_ordering(184) 00:08:33.515 fused_ordering(185) 00:08:33.515 fused_ordering(186) 00:08:33.515 fused_ordering(187) 00:08:33.515 fused_ordering(188) 00:08:33.515 fused_ordering(189) 00:08:33.515 fused_ordering(190) 00:08:33.515 fused_ordering(191) 00:08:33.515 fused_ordering(192) 00:08:33.515 fused_ordering(193) 00:08:33.515 fused_ordering(194) 00:08:33.515 fused_ordering(195) 00:08:33.515 fused_ordering(196) 00:08:33.515 fused_ordering(197) 00:08:33.515 fused_ordering(198) 00:08:33.515 fused_ordering(199) 00:08:33.515 fused_ordering(200) 00:08:33.515 fused_ordering(201) 00:08:33.515 fused_ordering(202) 00:08:33.515 fused_ordering(203) 00:08:33.515 fused_ordering(204) 00:08:33.515 fused_ordering(205) 
00:08:33.772 fused_ordering(206) 00:08:33.772 fused_ordering(207) 00:08:33.772 fused_ordering(208) 00:08:33.772 fused_ordering(209) 00:08:33.772 fused_ordering(210) 00:08:33.772 fused_ordering(211) 00:08:33.772 fused_ordering(212) 00:08:33.772 fused_ordering(213) 00:08:33.772 fused_ordering(214) 00:08:33.772 fused_ordering(215) 00:08:33.772 fused_ordering(216) 00:08:33.772 fused_ordering(217) 00:08:33.772 fused_ordering(218) 00:08:33.772 fused_ordering(219) 00:08:33.772 fused_ordering(220) 00:08:33.772 fused_ordering(221) 00:08:33.772 fused_ordering(222) 00:08:33.772 fused_ordering(223) 00:08:33.772 fused_ordering(224) 00:08:33.772 fused_ordering(225) 00:08:33.772 fused_ordering(226) 00:08:33.772 fused_ordering(227) 00:08:33.772 fused_ordering(228) 00:08:33.773 fused_ordering(229) 00:08:33.773 fused_ordering(230) 00:08:33.773 fused_ordering(231) 00:08:33.773 fused_ordering(232) 00:08:33.773 fused_ordering(233) 00:08:33.773 fused_ordering(234) 00:08:33.773 fused_ordering(235) 00:08:33.773 fused_ordering(236) 00:08:33.773 fused_ordering(237) 00:08:33.773 fused_ordering(238) 00:08:33.773 fused_ordering(239) 00:08:33.773 fused_ordering(240) 00:08:33.773 fused_ordering(241) 00:08:33.773 fused_ordering(242) 00:08:33.773 fused_ordering(243) 00:08:33.773 fused_ordering(244) 00:08:33.773 fused_ordering(245) 00:08:33.773 fused_ordering(246) 00:08:33.773 fused_ordering(247) 00:08:33.773 fused_ordering(248) 00:08:33.773 fused_ordering(249) 00:08:33.773 fused_ordering(250) 00:08:33.773 fused_ordering(251) 00:08:33.773 fused_ordering(252) 00:08:33.773 fused_ordering(253) 00:08:33.773 fused_ordering(254) 00:08:33.773 fused_ordering(255) 00:08:33.773 fused_ordering(256) 00:08:33.773 fused_ordering(257) 00:08:33.773 fused_ordering(258) 00:08:33.773 fused_ordering(259) 00:08:33.773 fused_ordering(260) 00:08:33.773 fused_ordering(261) 00:08:33.773 fused_ordering(262) 00:08:33.773 fused_ordering(263) 00:08:33.773 fused_ordering(264) 00:08:33.773 fused_ordering(265) 00:08:33.773 fused_ordering(266) 00:08:33.773 fused_ordering(267) 00:08:33.773 fused_ordering(268) 00:08:33.773 fused_ordering(269) 00:08:33.773 fused_ordering(270) 00:08:33.773 fused_ordering(271) 00:08:33.773 fused_ordering(272) 00:08:33.773 fused_ordering(273) 00:08:33.773 fused_ordering(274) 00:08:33.773 fused_ordering(275) 00:08:33.773 fused_ordering(276) 00:08:33.773 fused_ordering(277) 00:08:33.773 fused_ordering(278) 00:08:33.773 fused_ordering(279) 00:08:33.773 fused_ordering(280) 00:08:33.773 fused_ordering(281) 00:08:33.773 fused_ordering(282) 00:08:33.773 fused_ordering(283) 00:08:33.773 fused_ordering(284) 00:08:33.773 fused_ordering(285) 00:08:33.773 fused_ordering(286) 00:08:33.773 fused_ordering(287) 00:08:33.773 fused_ordering(288) 00:08:33.773 fused_ordering(289) 00:08:33.773 fused_ordering(290) 00:08:33.773 fused_ordering(291) 00:08:33.773 fused_ordering(292) 00:08:33.773 fused_ordering(293) 00:08:33.773 fused_ordering(294) 00:08:33.773 fused_ordering(295) 00:08:33.773 fused_ordering(296) 00:08:33.773 fused_ordering(297) 00:08:33.773 fused_ordering(298) 00:08:33.773 fused_ordering(299) 00:08:33.773 fused_ordering(300) 00:08:33.773 fused_ordering(301) 00:08:33.773 fused_ordering(302) 00:08:33.773 fused_ordering(303) 00:08:33.773 fused_ordering(304) 00:08:33.773 fused_ordering(305) 00:08:33.773 fused_ordering(306) 00:08:33.773 fused_ordering(307) 00:08:33.773 fused_ordering(308) 00:08:33.773 fused_ordering(309) 00:08:33.773 fused_ordering(310) 00:08:33.773 fused_ordering(311) 00:08:33.773 fused_ordering(312) 00:08:33.773 
fused_ordering(313) 00:08:33.773 fused_ordering(314) 00:08:33.773 fused_ordering(315) 00:08:33.773 fused_ordering(316) 00:08:33.773 fused_ordering(317) 00:08:33.773 fused_ordering(318) 00:08:33.773 fused_ordering(319) 00:08:33.773 fused_ordering(320) 00:08:33.773 fused_ordering(321) 00:08:33.773 fused_ordering(322) 00:08:33.773 fused_ordering(323) 00:08:33.773 fused_ordering(324) 00:08:33.773 fused_ordering(325) 00:08:33.773 fused_ordering(326) 00:08:33.773 fused_ordering(327) 00:08:33.773 fused_ordering(328) 00:08:33.773 fused_ordering(329) 00:08:33.773 fused_ordering(330) 00:08:33.773 fused_ordering(331) 00:08:33.773 fused_ordering(332) 00:08:33.773 fused_ordering(333) 00:08:33.773 fused_ordering(334) 00:08:33.773 fused_ordering(335) 00:08:33.773 fused_ordering(336) 00:08:33.773 fused_ordering(337) 00:08:33.773 fused_ordering(338) 00:08:33.773 fused_ordering(339) 00:08:33.773 fused_ordering(340) 00:08:33.773 fused_ordering(341) 00:08:33.773 fused_ordering(342) 00:08:33.773 fused_ordering(343) 00:08:33.773 fused_ordering(344) 00:08:33.773 fused_ordering(345) 00:08:33.773 fused_ordering(346) 00:08:33.773 fused_ordering(347) 00:08:33.773 fused_ordering(348) 00:08:33.773 fused_ordering(349) 00:08:33.773 fused_ordering(350) 00:08:33.773 fused_ordering(351) 00:08:33.773 fused_ordering(352) 00:08:33.773 fused_ordering(353) 00:08:33.773 fused_ordering(354) 00:08:33.773 fused_ordering(355) 00:08:33.773 fused_ordering(356) 00:08:33.773 fused_ordering(357) 00:08:33.773 fused_ordering(358) 00:08:33.773 fused_ordering(359) 00:08:33.773 fused_ordering(360) 00:08:33.773 fused_ordering(361) 00:08:33.773 fused_ordering(362) 00:08:33.773 fused_ordering(363) 00:08:33.773 fused_ordering(364) 00:08:33.773 fused_ordering(365) 00:08:33.773 fused_ordering(366) 00:08:33.773 fused_ordering(367) 00:08:33.773 fused_ordering(368) 00:08:33.773 fused_ordering(369) 00:08:33.773 fused_ordering(370) 00:08:33.773 fused_ordering(371) 00:08:33.773 fused_ordering(372) 00:08:33.773 fused_ordering(373) 00:08:33.773 fused_ordering(374) 00:08:33.773 fused_ordering(375) 00:08:33.773 fused_ordering(376) 00:08:33.773 fused_ordering(377) 00:08:33.773 fused_ordering(378) 00:08:33.773 fused_ordering(379) 00:08:33.773 fused_ordering(380) 00:08:33.773 fused_ordering(381) 00:08:33.773 fused_ordering(382) 00:08:33.773 fused_ordering(383) 00:08:33.773 fused_ordering(384) 00:08:33.773 fused_ordering(385) 00:08:33.773 fused_ordering(386) 00:08:33.773 fused_ordering(387) 00:08:33.773 fused_ordering(388) 00:08:33.773 fused_ordering(389) 00:08:33.773 fused_ordering(390) 00:08:33.773 fused_ordering(391) 00:08:33.773 fused_ordering(392) 00:08:33.773 fused_ordering(393) 00:08:33.773 fused_ordering(394) 00:08:33.773 fused_ordering(395) 00:08:33.773 fused_ordering(396) 00:08:33.773 fused_ordering(397) 00:08:33.773 fused_ordering(398) 00:08:33.773 fused_ordering(399) 00:08:33.773 fused_ordering(400) 00:08:33.773 fused_ordering(401) 00:08:33.773 fused_ordering(402) 00:08:33.773 fused_ordering(403) 00:08:33.773 fused_ordering(404) 00:08:33.773 fused_ordering(405) 00:08:33.773 fused_ordering(406) 00:08:33.773 fused_ordering(407) 00:08:33.773 fused_ordering(408) 00:08:33.773 fused_ordering(409) 00:08:33.773 fused_ordering(410) 00:08:34.031 fused_ordering(411) 00:08:34.031 fused_ordering(412) 00:08:34.031 fused_ordering(413) 00:08:34.031 fused_ordering(414) 00:08:34.031 fused_ordering(415) 00:08:34.031 fused_ordering(416) 00:08:34.031 fused_ordering(417) 00:08:34.031 fused_ordering(418) 00:08:34.031 fused_ordering(419) 00:08:34.031 fused_ordering(420) 
00:08:34.031 fused_ordering(421) 00:08:34.031 fused_ordering(422) 00:08:34.031 fused_ordering(423) 00:08:34.031 fused_ordering(424) 00:08:34.031 fused_ordering(425) 00:08:34.031 fused_ordering(426) 00:08:34.031 fused_ordering(427) 00:08:34.031 fused_ordering(428) 00:08:34.031 fused_ordering(429) 00:08:34.031 fused_ordering(430) 00:08:34.031 fused_ordering(431) 00:08:34.031 fused_ordering(432) 00:08:34.031 fused_ordering(433) 00:08:34.031 fused_ordering(434) 00:08:34.031 fused_ordering(435) 00:08:34.031 fused_ordering(436) 00:08:34.031 fused_ordering(437) 00:08:34.031 fused_ordering(438) 00:08:34.031 fused_ordering(439) 00:08:34.031 fused_ordering(440) 00:08:34.031 fused_ordering(441) 00:08:34.031 fused_ordering(442) 00:08:34.031 fused_ordering(443) 00:08:34.031 fused_ordering(444) 00:08:34.031 fused_ordering(445) 00:08:34.031 fused_ordering(446) 00:08:34.031 fused_ordering(447) 00:08:34.031 fused_ordering(448) 00:08:34.031 fused_ordering(449) 00:08:34.031 fused_ordering(450) 00:08:34.031 fused_ordering(451) 00:08:34.031 fused_ordering(452) 00:08:34.031 fused_ordering(453) 00:08:34.031 fused_ordering(454) 00:08:34.031 fused_ordering(455) 00:08:34.031 fused_ordering(456) 00:08:34.031 fused_ordering(457) 00:08:34.031 fused_ordering(458) 00:08:34.031 fused_ordering(459) 00:08:34.031 fused_ordering(460) 00:08:34.031 fused_ordering(461) 00:08:34.031 fused_ordering(462) 00:08:34.031 fused_ordering(463) 00:08:34.031 fused_ordering(464) 00:08:34.031 fused_ordering(465) 00:08:34.031 fused_ordering(466) 00:08:34.031 fused_ordering(467) 00:08:34.031 fused_ordering(468) 00:08:34.031 fused_ordering(469) 00:08:34.031 fused_ordering(470) 00:08:34.031 fused_ordering(471) 00:08:34.031 fused_ordering(472) 00:08:34.031 fused_ordering(473) 00:08:34.031 fused_ordering(474) 00:08:34.031 fused_ordering(475) 00:08:34.031 fused_ordering(476) 00:08:34.031 fused_ordering(477) 00:08:34.031 fused_ordering(478) 00:08:34.031 fused_ordering(479) 00:08:34.031 fused_ordering(480) 00:08:34.031 fused_ordering(481) 00:08:34.031 fused_ordering(482) 00:08:34.031 fused_ordering(483) 00:08:34.031 fused_ordering(484) 00:08:34.031 fused_ordering(485) 00:08:34.031 fused_ordering(486) 00:08:34.031 fused_ordering(487) 00:08:34.031 fused_ordering(488) 00:08:34.031 fused_ordering(489) 00:08:34.031 fused_ordering(490) 00:08:34.031 fused_ordering(491) 00:08:34.031 fused_ordering(492) 00:08:34.031 fused_ordering(493) 00:08:34.031 fused_ordering(494) 00:08:34.031 fused_ordering(495) 00:08:34.031 fused_ordering(496) 00:08:34.031 fused_ordering(497) 00:08:34.031 fused_ordering(498) 00:08:34.031 fused_ordering(499) 00:08:34.031 fused_ordering(500) 00:08:34.031 fused_ordering(501) 00:08:34.031 fused_ordering(502) 00:08:34.031 fused_ordering(503) 00:08:34.031 fused_ordering(504) 00:08:34.031 fused_ordering(505) 00:08:34.031 fused_ordering(506) 00:08:34.031 fused_ordering(507) 00:08:34.031 fused_ordering(508) 00:08:34.031 fused_ordering(509) 00:08:34.031 fused_ordering(510) 00:08:34.031 fused_ordering(511) 00:08:34.031 fused_ordering(512) 00:08:34.031 fused_ordering(513) 00:08:34.031 fused_ordering(514) 00:08:34.031 fused_ordering(515) 00:08:34.031 fused_ordering(516) 00:08:34.031 fused_ordering(517) 00:08:34.031 fused_ordering(518) 00:08:34.031 fused_ordering(519) 00:08:34.031 fused_ordering(520) 00:08:34.031 fused_ordering(521) 00:08:34.031 fused_ordering(522) 00:08:34.031 fused_ordering(523) 00:08:34.031 fused_ordering(524) 00:08:34.031 fused_ordering(525) 00:08:34.031 fused_ordering(526) 00:08:34.031 fused_ordering(527) 00:08:34.031 
fused_ordering(528) 00:08:34.031 fused_ordering(529) 00:08:34.031 fused_ordering(530) 00:08:34.031 fused_ordering(531) 00:08:34.031 fused_ordering(532) 00:08:34.031 fused_ordering(533) 00:08:34.031 fused_ordering(534) 00:08:34.031 fused_ordering(535) 00:08:34.031 fused_ordering(536) 00:08:34.031 fused_ordering(537) 00:08:34.031 fused_ordering(538) 00:08:34.031 fused_ordering(539) 00:08:34.031 fused_ordering(540) 00:08:34.031 fused_ordering(541) 00:08:34.031 fused_ordering(542) 00:08:34.031 fused_ordering(543) 00:08:34.031 fused_ordering(544) 00:08:34.031 fused_ordering(545) 00:08:34.031 fused_ordering(546) 00:08:34.031 fused_ordering(547) 00:08:34.031 fused_ordering(548) 00:08:34.031 fused_ordering(549) 00:08:34.031 fused_ordering(550) 00:08:34.031 fused_ordering(551) 00:08:34.031 fused_ordering(552) 00:08:34.031 fused_ordering(553) 00:08:34.031 fused_ordering(554) 00:08:34.031 fused_ordering(555) 00:08:34.031 fused_ordering(556) 00:08:34.031 fused_ordering(557) 00:08:34.031 fused_ordering(558) 00:08:34.031 fused_ordering(559) 00:08:34.031 fused_ordering(560) 00:08:34.031 fused_ordering(561) 00:08:34.031 fused_ordering(562) 00:08:34.031 fused_ordering(563) 00:08:34.031 fused_ordering(564) 00:08:34.031 fused_ordering(565) 00:08:34.031 fused_ordering(566) 00:08:34.031 fused_ordering(567) 00:08:34.031 fused_ordering(568) 00:08:34.031 fused_ordering(569) 00:08:34.031 fused_ordering(570) 00:08:34.031 fused_ordering(571) 00:08:34.031 fused_ordering(572) 00:08:34.031 fused_ordering(573) 00:08:34.031 fused_ordering(574) 00:08:34.031 fused_ordering(575) 00:08:34.031 fused_ordering(576) 00:08:34.031 fused_ordering(577) 00:08:34.031 fused_ordering(578) 00:08:34.031 fused_ordering(579) 00:08:34.031 fused_ordering(580) 00:08:34.031 fused_ordering(581) 00:08:34.031 fused_ordering(582) 00:08:34.031 fused_ordering(583) 00:08:34.031 fused_ordering(584) 00:08:34.031 fused_ordering(585) 00:08:34.031 fused_ordering(586) 00:08:34.031 fused_ordering(587) 00:08:34.031 fused_ordering(588) 00:08:34.031 fused_ordering(589) 00:08:34.031 fused_ordering(590) 00:08:34.031 fused_ordering(591) 00:08:34.031 fused_ordering(592) 00:08:34.031 fused_ordering(593) 00:08:34.031 fused_ordering(594) 00:08:34.031 fused_ordering(595) 00:08:34.031 fused_ordering(596) 00:08:34.031 fused_ordering(597) 00:08:34.031 fused_ordering(598) 00:08:34.031 fused_ordering(599) 00:08:34.031 fused_ordering(600) 00:08:34.031 fused_ordering(601) 00:08:34.031 fused_ordering(602) 00:08:34.031 fused_ordering(603) 00:08:34.031 fused_ordering(604) 00:08:34.031 fused_ordering(605) 00:08:34.031 fused_ordering(606) 00:08:34.031 fused_ordering(607) 00:08:34.031 fused_ordering(608) 00:08:34.031 fused_ordering(609) 00:08:34.031 fused_ordering(610) 00:08:34.031 fused_ordering(611) 00:08:34.031 fused_ordering(612) 00:08:34.031 fused_ordering(613) 00:08:34.031 fused_ordering(614) 00:08:34.031 fused_ordering(615) 00:08:34.597 fused_ordering(616) 00:08:34.597 fused_ordering(617) 00:08:34.597 fused_ordering(618) 00:08:34.597 fused_ordering(619) 00:08:34.597 fused_ordering(620) 00:08:34.597 fused_ordering(621) 00:08:34.597 fused_ordering(622) 00:08:34.597 fused_ordering(623) 00:08:34.597 fused_ordering(624) 00:08:34.597 fused_ordering(625) 00:08:34.597 fused_ordering(626) 00:08:34.597 fused_ordering(627) 00:08:34.597 fused_ordering(628) 00:08:34.597 fused_ordering(629) 00:08:34.597 fused_ordering(630) 00:08:34.597 fused_ordering(631) 00:08:34.597 fused_ordering(632) 00:08:34.597 fused_ordering(633) 00:08:34.597 fused_ordering(634) 00:08:34.597 fused_ordering(635) 
00:08:34.597 fused_ordering(636) 00:08:34.597 fused_ordering(637) 00:08:34.597 fused_ordering(638) 00:08:34.597 fused_ordering(639) 00:08:34.597 fused_ordering(640) 00:08:34.597 fused_ordering(641) 00:08:34.597 fused_ordering(642) 00:08:34.597 fused_ordering(643) 00:08:34.597 fused_ordering(644) 00:08:34.597 fused_ordering(645) 00:08:34.597 fused_ordering(646) 00:08:34.597 fused_ordering(647) 00:08:34.597 fused_ordering(648) 00:08:34.597 fused_ordering(649) 00:08:34.597 fused_ordering(650) 00:08:34.597 fused_ordering(651) 00:08:34.597 fused_ordering(652) 00:08:34.597 fused_ordering(653) 00:08:34.597 fused_ordering(654) 00:08:34.597 fused_ordering(655) 00:08:34.597 fused_ordering(656) 00:08:34.597 fused_ordering(657) 00:08:34.597 fused_ordering(658) 00:08:34.597 fused_ordering(659) 00:08:34.597 fused_ordering(660) 00:08:34.597 fused_ordering(661) 00:08:34.597 fused_ordering(662) 00:08:34.597 fused_ordering(663) 00:08:34.597 fused_ordering(664) 00:08:34.597 fused_ordering(665) 00:08:34.597 fused_ordering(666) 00:08:34.597 fused_ordering(667) 00:08:34.597 fused_ordering(668) 00:08:34.597 fused_ordering(669) 00:08:34.597 fused_ordering(670) 00:08:34.597 fused_ordering(671) 00:08:34.597 fused_ordering(672) 00:08:34.597 fused_ordering(673) 00:08:34.597 fused_ordering(674) 00:08:34.597 fused_ordering(675) 00:08:34.597 fused_ordering(676) 00:08:34.597 fused_ordering(677) 00:08:34.597 fused_ordering(678) 00:08:34.597 fused_ordering(679) 00:08:34.597 fused_ordering(680) 00:08:34.597 fused_ordering(681) 00:08:34.597 fused_ordering(682) 00:08:34.597 fused_ordering(683) 00:08:34.597 fused_ordering(684) 00:08:34.597 fused_ordering(685) 00:08:34.597 fused_ordering(686) 00:08:34.597 fused_ordering(687) 00:08:34.597 fused_ordering(688) 00:08:34.597 fused_ordering(689) 00:08:34.597 fused_ordering(690) 00:08:34.597 fused_ordering(691) 00:08:34.597 fused_ordering(692) 00:08:34.597 fused_ordering(693) 00:08:34.597 fused_ordering(694) 00:08:34.597 fused_ordering(695) 00:08:34.597 fused_ordering(696) 00:08:34.597 fused_ordering(697) 00:08:34.597 fused_ordering(698) 00:08:34.597 fused_ordering(699) 00:08:34.597 fused_ordering(700) 00:08:34.597 fused_ordering(701) 00:08:34.597 fused_ordering(702) 00:08:34.597 fused_ordering(703) 00:08:34.597 fused_ordering(704) 00:08:34.597 fused_ordering(705) 00:08:34.597 fused_ordering(706) 00:08:34.597 fused_ordering(707) 00:08:34.597 fused_ordering(708) 00:08:34.597 fused_ordering(709) 00:08:34.597 fused_ordering(710) 00:08:34.597 fused_ordering(711) 00:08:34.597 fused_ordering(712) 00:08:34.597 fused_ordering(713) 00:08:34.597 fused_ordering(714) 00:08:34.597 fused_ordering(715) 00:08:34.597 fused_ordering(716) 00:08:34.597 fused_ordering(717) 00:08:34.597 fused_ordering(718) 00:08:34.597 fused_ordering(719) 00:08:34.597 fused_ordering(720) 00:08:34.597 fused_ordering(721) 00:08:34.597 fused_ordering(722) 00:08:34.597 fused_ordering(723) 00:08:34.597 fused_ordering(724) 00:08:34.597 fused_ordering(725) 00:08:34.597 fused_ordering(726) 00:08:34.597 fused_ordering(727) 00:08:34.597 fused_ordering(728) 00:08:34.597 fused_ordering(729) 00:08:34.597 fused_ordering(730) 00:08:34.597 fused_ordering(731) 00:08:34.597 fused_ordering(732) 00:08:34.598 fused_ordering(733) 00:08:34.598 fused_ordering(734) 00:08:34.598 fused_ordering(735) 00:08:34.598 fused_ordering(736) 00:08:34.598 fused_ordering(737) 00:08:34.598 fused_ordering(738) 00:08:34.598 fused_ordering(739) 00:08:34.598 fused_ordering(740) 00:08:34.598 fused_ordering(741) 00:08:34.598 fused_ordering(742) 00:08:34.598 
fused_ordering(743) 00:08:34.598 fused_ordering(744) 00:08:34.598 fused_ordering(745) 00:08:34.598 fused_ordering(746) 00:08:34.598 fused_ordering(747) 00:08:34.598 fused_ordering(748) 00:08:34.598 fused_ordering(749) 00:08:34.598 fused_ordering(750) 00:08:34.598 fused_ordering(751) 00:08:34.598 fused_ordering(752) 00:08:34.598 fused_ordering(753) 00:08:34.598 fused_ordering(754) 00:08:34.598 fused_ordering(755) 00:08:34.598 fused_ordering(756) 00:08:34.598 fused_ordering(757) 00:08:34.598 fused_ordering(758) 00:08:34.598 fused_ordering(759) 00:08:34.598 fused_ordering(760) 00:08:34.598 fused_ordering(761) 00:08:34.598 fused_ordering(762) 00:08:34.598 fused_ordering(763) 00:08:34.598 fused_ordering(764) 00:08:34.598 fused_ordering(765) 00:08:34.598 fused_ordering(766) 00:08:34.598 fused_ordering(767) 00:08:34.598 fused_ordering(768) 00:08:34.598 fused_ordering(769) 00:08:34.598 fused_ordering(770) 00:08:34.598 fused_ordering(771) 00:08:34.598 fused_ordering(772) 00:08:34.598 fused_ordering(773) 00:08:34.598 fused_ordering(774) 00:08:34.598 fused_ordering(775) 00:08:34.598 fused_ordering(776) 00:08:34.598 fused_ordering(777) 00:08:34.598 fused_ordering(778) 00:08:34.598 fused_ordering(779) 00:08:34.598 fused_ordering(780) 00:08:34.598 fused_ordering(781) 00:08:34.598 fused_ordering(782) 00:08:34.598 fused_ordering(783) 00:08:34.598 fused_ordering(784) 00:08:34.598 fused_ordering(785) 00:08:34.598 fused_ordering(786) 00:08:34.598 fused_ordering(787) 00:08:34.598 fused_ordering(788) 00:08:34.598 fused_ordering(789) 00:08:34.598 fused_ordering(790) 00:08:34.598 fused_ordering(791) 00:08:34.598 fused_ordering(792) 00:08:34.598 fused_ordering(793) 00:08:34.598 fused_ordering(794) 00:08:34.598 fused_ordering(795) 00:08:34.598 fused_ordering(796) 00:08:34.598 fused_ordering(797) 00:08:34.598 fused_ordering(798) 00:08:34.598 fused_ordering(799) 00:08:34.598 fused_ordering(800) 00:08:34.598 fused_ordering(801) 00:08:34.598 fused_ordering(802) 00:08:34.598 fused_ordering(803) 00:08:34.598 fused_ordering(804) 00:08:34.598 fused_ordering(805) 00:08:34.598 fused_ordering(806) 00:08:34.598 fused_ordering(807) 00:08:34.598 fused_ordering(808) 00:08:34.598 fused_ordering(809) 00:08:34.598 fused_ordering(810) 00:08:34.598 fused_ordering(811) 00:08:34.598 fused_ordering(812) 00:08:34.598 fused_ordering(813) 00:08:34.598 fused_ordering(814) 00:08:34.598 fused_ordering(815) 00:08:34.598 fused_ordering(816) 00:08:34.598 fused_ordering(817) 00:08:34.598 fused_ordering(818) 00:08:34.598 fused_ordering(819) 00:08:34.598 fused_ordering(820) 00:08:35.164 fused_ordering(821) 00:08:35.164 fused_ordering(822) 00:08:35.164 fused_ordering(823) 00:08:35.164 fused_ordering(824) 00:08:35.164 fused_ordering(825) 00:08:35.164 fused_ordering(826) 00:08:35.164 fused_ordering(827) 00:08:35.164 fused_ordering(828) 00:08:35.164 fused_ordering(829) 00:08:35.164 fused_ordering(830) 00:08:35.164 fused_ordering(831) 00:08:35.164 fused_ordering(832) 00:08:35.164 fused_ordering(833) 00:08:35.164 fused_ordering(834) 00:08:35.164 fused_ordering(835) 00:08:35.164 fused_ordering(836) 00:08:35.164 fused_ordering(837) 00:08:35.164 fused_ordering(838) 00:08:35.164 fused_ordering(839) 00:08:35.164 fused_ordering(840) 00:08:35.164 fused_ordering(841) 00:08:35.164 fused_ordering(842) 00:08:35.164 fused_ordering(843) 00:08:35.164 fused_ordering(844) 00:08:35.164 fused_ordering(845) 00:08:35.164 fused_ordering(846) 00:08:35.164 fused_ordering(847) 00:08:35.164 fused_ordering(848) 00:08:35.164 fused_ordering(849) 00:08:35.164 fused_ordering(850) 
00:08:35.164 fused_ordering(851) 00:08:35.164 fused_ordering(852) 00:08:35.164 fused_ordering(853) 00:08:35.164 fused_ordering(854) 00:08:35.164 fused_ordering(855) 00:08:35.164 fused_ordering(856) 00:08:35.164 fused_ordering(857) 00:08:35.164 fused_ordering(858) 00:08:35.164 fused_ordering(859) 00:08:35.164 fused_ordering(860) 00:08:35.164 fused_ordering(861) 00:08:35.164 fused_ordering(862) 00:08:35.164 fused_ordering(863) 00:08:35.164 fused_ordering(864) 00:08:35.164 fused_ordering(865) 00:08:35.164 fused_ordering(866) 00:08:35.164 fused_ordering(867) 00:08:35.164 fused_ordering(868) 00:08:35.164 fused_ordering(869) 00:08:35.164 fused_ordering(870) 00:08:35.164 fused_ordering(871) 00:08:35.164 fused_ordering(872) 00:08:35.164 fused_ordering(873) 00:08:35.164 fused_ordering(874) 00:08:35.164 fused_ordering(875) 00:08:35.164 fused_ordering(876) 00:08:35.164 fused_ordering(877) 00:08:35.164 fused_ordering(878) 00:08:35.164 fused_ordering(879) 00:08:35.164 fused_ordering(880) 00:08:35.164 fused_ordering(881) 00:08:35.164 fused_ordering(882) 00:08:35.164 fused_ordering(883) 00:08:35.164 fused_ordering(884) 00:08:35.164 fused_ordering(885) 00:08:35.164 fused_ordering(886) 00:08:35.164 fused_ordering(887) 00:08:35.164 fused_ordering(888) 00:08:35.164 fused_ordering(889) 00:08:35.164 fused_ordering(890) 00:08:35.164 fused_ordering(891) 00:08:35.164 fused_ordering(892) 00:08:35.164 fused_ordering(893) 00:08:35.164 fused_ordering(894) 00:08:35.164 fused_ordering(895) 00:08:35.164 fused_ordering(896) 00:08:35.164 fused_ordering(897) 00:08:35.164 fused_ordering(898) 00:08:35.164 fused_ordering(899) 00:08:35.164 fused_ordering(900) 00:08:35.164 fused_ordering(901) 00:08:35.164 fused_ordering(902) 00:08:35.164 fused_ordering(903) 00:08:35.164 fused_ordering(904) 00:08:35.164 fused_ordering(905) 00:08:35.164 fused_ordering(906) 00:08:35.164 fused_ordering(907) 00:08:35.164 fused_ordering(908) 00:08:35.164 fused_ordering(909) 00:08:35.164 fused_ordering(910) 00:08:35.164 fused_ordering(911) 00:08:35.164 fused_ordering(912) 00:08:35.164 fused_ordering(913) 00:08:35.164 fused_ordering(914) 00:08:35.164 fused_ordering(915) 00:08:35.164 fused_ordering(916) 00:08:35.164 fused_ordering(917) 00:08:35.164 fused_ordering(918) 00:08:35.164 fused_ordering(919) 00:08:35.164 fused_ordering(920) 00:08:35.164 fused_ordering(921) 00:08:35.164 fused_ordering(922) 00:08:35.164 fused_ordering(923) 00:08:35.164 fused_ordering(924) 00:08:35.164 fused_ordering(925) 00:08:35.164 fused_ordering(926) 00:08:35.164 fused_ordering(927) 00:08:35.164 fused_ordering(928) 00:08:35.164 fused_ordering(929) 00:08:35.164 fused_ordering(930) 00:08:35.164 fused_ordering(931) 00:08:35.164 fused_ordering(932) 00:08:35.164 fused_ordering(933) 00:08:35.164 fused_ordering(934) 00:08:35.164 fused_ordering(935) 00:08:35.164 fused_ordering(936) 00:08:35.164 fused_ordering(937) 00:08:35.164 fused_ordering(938) 00:08:35.164 fused_ordering(939) 00:08:35.164 fused_ordering(940) 00:08:35.164 fused_ordering(941) 00:08:35.165 fused_ordering(942) 00:08:35.165 fused_ordering(943) 00:08:35.165 fused_ordering(944) 00:08:35.165 fused_ordering(945) 00:08:35.165 fused_ordering(946) 00:08:35.165 fused_ordering(947) 00:08:35.165 fused_ordering(948) 00:08:35.165 fused_ordering(949) 00:08:35.165 fused_ordering(950) 00:08:35.165 fused_ordering(951) 00:08:35.165 fused_ordering(952) 00:08:35.165 fused_ordering(953) 00:08:35.165 fused_ordering(954) 00:08:35.165 fused_ordering(955) 00:08:35.165 fused_ordering(956) 00:08:35.165 fused_ordering(957) 00:08:35.165 
fused_ordering(958) 00:08:35.165 fused_ordering(959) 00:08:35.165 fused_ordering(960) 00:08:35.165 fused_ordering(961) 00:08:35.165 fused_ordering(962) 00:08:35.165 fused_ordering(963) 00:08:35.165 fused_ordering(964) 00:08:35.165 fused_ordering(965) 00:08:35.165 fused_ordering(966) 00:08:35.165 fused_ordering(967) 00:08:35.165 fused_ordering(968) 00:08:35.165 fused_ordering(969) 00:08:35.165 fused_ordering(970) 00:08:35.165 fused_ordering(971) 00:08:35.165 fused_ordering(972) 00:08:35.165 fused_ordering(973) 00:08:35.165 fused_ordering(974) 00:08:35.165 fused_ordering(975) 00:08:35.165 fused_ordering(976) 00:08:35.165 fused_ordering(977) 00:08:35.165 fused_ordering(978) 00:08:35.165 fused_ordering(979) 00:08:35.165 fused_ordering(980) 00:08:35.165 fused_ordering(981) 00:08:35.165 fused_ordering(982) 00:08:35.165 fused_ordering(983) 00:08:35.165 fused_ordering(984) 00:08:35.165 fused_ordering(985) 00:08:35.165 fused_ordering(986) 00:08:35.165 fused_ordering(987) 00:08:35.165 fused_ordering(988) 00:08:35.165 fused_ordering(989) 00:08:35.165 fused_ordering(990) 00:08:35.165 fused_ordering(991) 00:08:35.165 fused_ordering(992) 00:08:35.165 fused_ordering(993) 00:08:35.165 fused_ordering(994) 00:08:35.165 fused_ordering(995) 00:08:35.165 fused_ordering(996) 00:08:35.165 fused_ordering(997) 00:08:35.165 fused_ordering(998) 00:08:35.165 fused_ordering(999) 00:08:35.165 fused_ordering(1000) 00:08:35.165 fused_ordering(1001) 00:08:35.165 fused_ordering(1002) 00:08:35.165 fused_ordering(1003) 00:08:35.165 fused_ordering(1004) 00:08:35.165 fused_ordering(1005) 00:08:35.165 fused_ordering(1006) 00:08:35.165 fused_ordering(1007) 00:08:35.165 fused_ordering(1008) 00:08:35.165 fused_ordering(1009) 00:08:35.165 fused_ordering(1010) 00:08:35.165 fused_ordering(1011) 00:08:35.165 fused_ordering(1012) 00:08:35.165 fused_ordering(1013) 00:08:35.165 fused_ordering(1014) 00:08:35.165 fused_ordering(1015) 00:08:35.165 fused_ordering(1016) 00:08:35.165 fused_ordering(1017) 00:08:35.165 fused_ordering(1018) 00:08:35.165 fused_ordering(1019) 00:08:35.165 fused_ordering(1020) 00:08:35.165 fused_ordering(1021) 00:08:35.165 fused_ordering(1022) 00:08:35.165 fused_ordering(1023) 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.165 rmmod nvme_tcp 00:08:35.165 rmmod nvme_fabrics 00:08:35.165 rmmod nvme_keyring 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71414 ']' 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71414 00:08:35.165 19:24:24 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71414 ']' 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71414 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71414 00:08:35.165 killing process with pid 71414 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71414' 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71414 00:08:35.165 19:24:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71414 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.424 ************************************ 00:08:35.424 END TEST nvmf_fused_ordering 00:08:35.424 ************************************ 00:08:35.424 00:08:35.424 real 0m4.121s 00:08:35.424 user 0m5.098s 00:08:35.424 sys 0m1.277s 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.424 19:24:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.424 19:24:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:35.424 19:24:25 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:35.424 19:24:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.424 19:24:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.424 19:24:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.424 ************************************ 00:08:35.424 START TEST nvmf_delete_subsystem 00:08:35.424 ************************************ 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:35.424 * Looking for test storage... 
00:08:35.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.424 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.425 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:35.682 Cannot find device "nvmf_tgt_br" 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.682 Cannot find device "nvmf_tgt_br2" 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:35.682 Cannot find device "nvmf_tgt_br" 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:35.682 Cannot find device "nvmf_tgt_br2" 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.682 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.941 19:24:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:35.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:35.941 00:08:35.941 --- 10.0.0.2 ping statistics --- 00:08:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.941 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:35.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:08:35.941 00:08:35.941 --- 10.0.0.3 ping statistics --- 00:08:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.941 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:35.941 00:08:35.941 --- 10.0.0.1 ping statistics --- 00:08:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.941 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71673 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71673 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71673 ']' 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
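The ping checks above confirm the virtual topology that nvmf_veth_init rebuilt for this test: three veth pairs, with nvmf_init_if (10.0.0.1/24) left in the root namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) moved into nvmf_tgt_ns_spdk, their peer ends enslaved to the nvmf_br bridge, and iptables rules admitting NVMe/TCP traffic on port 4420. Condensed from the trace above (link-up commands omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT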
00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.941 19:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.941 [2024-07-15 19:24:25.629146] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:35.942 [2024-07-15 19:24:25.629274] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.200 [2024-07-15 19:24:25.776853] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.200 [2024-07-15 19:24:25.836330] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.200 [2024-07-15 19:24:25.836582] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.200 [2024-07-15 19:24:25.836719] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.200 [2024-07-15 19:24:25.836937] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.200 [2024-07-15 19:24:25.836982] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.200 [2024-07-15 19:24:25.837154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.200 [2024-07-15 19:24:25.837165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 [2024-07-15 19:24:26.649580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 [2024-07-15 19:24:26.665748] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 NULL1 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 Delay0 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71724 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:37.135 19:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:37.135 [2024-07-15 19:24:26.860441] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
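At this point the delete_subsystem test has built its target entirely through RPCs: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev as namespace 1, with spdk_nvme_perf already driving I/O from the host side. The rpc_cmd calls in the trace are a test wrapper around scripts/rpc.py; a standalone sketch of the same sequence (the $rpc variable is only shorthand here) looks roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport plus a subsystem that listens on the namespaced target address.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Back the namespace with a null bdev behind a high-latency delay bdev, so that
    # plenty of I/O is still queued when the subsystem goes away.
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # With perf still running, deleting the subsystem forces the queued commands to
    # complete with errors instead of hanging -- the "completed with error" lines below.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1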
00:08:39.070 19:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.071 19:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.071 19:24:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 [2024-07-15 19:24:28.896660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f899c00d4b0 is same with the state(5) to be set 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read 
completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 starting I/O failed: -6 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 [2024-07-15 19:24:28.897853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20edbd0 is same with the state(5) to be set 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Write completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.329 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with 
error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read 
completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Write completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 Read completed with error (sct=0, sc=8) 00:08:39.330 [2024-07-15 19:24:28.899035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f899c000c00 is same with the state(5) to be set 00:08:40.263 [2024-07-15 19:24:29.875210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cb510 is same with the state(5) to be set 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 [2024-07-15 19:24:29.894288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed880 is same with the state(5) to be set 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write 
completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 [2024-07-15 19:24:29.894568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eaab0 is same with the state(5) to be set 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 [2024-07-15 19:24:29.898649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f899c00d020 is same with the state(5) to be set 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Read completed with error (sct=0, sc=8) 00:08:40.263 Write completed with error (sct=0, sc=8) 00:08:40.263 [2024-07-15 19:24:29.898815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f899c00d800 is same with the state(5) to be set 00:08:40.263 Initializing NVMe Controllers 00:08:40.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:40.263 Controller IO queue size 128, less than required. 00:08:40.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:40.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:40.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:40.263 Initialization complete. Launching workers. 
00:08:40.263 ======================================================== 00:08:40.263 Latency(us) 00:08:40.263 Device Information : IOPS MiB/s Average min max 00:08:40.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.14 0.08 889982.62 468.23 1012417.56 00:08:40.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.27 0.08 929900.26 717.86 1012182.34 00:08:40.264 ======================================================== 00:08:40.264 Total : 327.42 0.16 908913.25 468.23 1012417.56 00:08:40.264 00:08:40.264 [2024-07-15 19:24:29.899485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cb510 (9): Bad file descriptor 00:08:40.264 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:40.264 19:24:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.264 19:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:40.264 19:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71724 00:08:40.264 19:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71724 00:08:40.831 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71724) - No such process 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71724 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71724 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71724 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.831 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 [2024-07-15 19:24:30.422553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71769 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:40.832 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.832 [2024-07-15 19:24:30.604524] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
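The second half of the test repeats the exercise with a fresh perf instance (pid 71769 in this run) and then polls it from the shell: the kill -0 / sleep 0.5 pairs that fill the next lines are that polling loop. A rough, simplified reconstruction of the idiom follows; the variable names and failure message are illustrative, and the real delete_subsystem.sh arranges the loop slightly differently.

    perf_pid=71769      # in the script this comes from $! after launching spdk_nvme_perf
    delay=0

    # kill -0 only probes for existence; loop until the process exits on its own,
    # but give up after roughly ten seconds of half-second probes.
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "perf process $perf_pid did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done

Once kill -0 starts failing with "No such process" (as it does a few iterations below), perf has exited and the script can wait on it and tear the target down.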
00:08:41.398 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.398 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:41.398 19:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.656 19:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.656 19:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:41.656 19:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.220 19:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.220 19:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:42.220 19:24:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.785 19:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.785 19:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:42.785 19:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.349 19:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.349 19:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:43.349 19:24:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.918 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.918 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:43.918 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.918 Initializing NVMe Controllers 00:08:43.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.918 Controller IO queue size 128, less than required. 00:08:43.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.918 Initialization complete. Launching workers. 
00:08:43.918 ======================================================== 00:08:43.918 Latency(us) 00:08:43.918 Device Information : IOPS MiB/s Average min max 00:08:43.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003582.35 1000146.13 1041653.26 00:08:43.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005912.80 1000173.65 1042484.84 00:08:43.918 ======================================================== 00:08:43.918 Total : 256.00 0.12 1004747.57 1000146.13 1042484.84 00:08:43.918 00:08:44.176 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.177 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71769 00:08:44.177 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71769) - No such process 00:08:44.177 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71769 00:08:44.177 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:44.177 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:44.177 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.177 19:24:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.434 rmmod nvme_tcp 00:08:44.434 rmmod nvme_fabrics 00:08:44.434 rmmod nvme_keyring 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71673 ']' 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71673 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71673 ']' 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71673 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71673 00:08:44.434 killing process with pid 71673 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71673' 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71673 00:08:44.434 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71673 00:08:44.691 19:24:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:44.691 00:08:44.691 real 0m9.158s 00:08:44.691 user 0m28.595s 00:08:44.691 sys 0m1.479s 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.691 ************************************ 00:08:44.691 END TEST nvmf_delete_subsystem 00:08:44.691 19:24:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.691 ************************************ 00:08:44.692 19:24:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.692 19:24:34 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:44.692 19:24:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.692 19:24:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.692 19:24:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.692 ************************************ 00:08:44.692 START TEST nvmf_ns_masking 00:08:44.692 ************************************ 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:44.692 * Looking for test storage... 
00:08:44.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d65d0e0f-aee6-46a2-bc3c-af6cf20ee50b 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=af4a4b8d-2785-4c2d-bc86-0fda8a1462b8 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:44.692 
19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ace11f18-c4e4-4b47-b23a-32aaf0a41ce2 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:44.692 Cannot find device "nvmf_tgt_br" 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:08:44.692 19:24:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.692 Cannot find device "nvmf_tgt_br2" 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:08:44.692 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:44.949 Cannot find device "nvmf_tgt_br" 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:44.949 Cannot find device "nvmf_tgt_br2" 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.949 19:24:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.949 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:45.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:08:45.208 00:08:45.208 --- 10.0.0.2 ping statistics --- 00:08:45.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.208 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:45.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:45.208 00:08:45.208 --- 10.0.0.3 ping statistics --- 00:08:45.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.208 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:45.208 00:08:45.208 --- 10.0.0.1 ping statistics --- 00:08:45.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.208 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72015 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72015 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72015 ']' 00:08:45.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.208 19:24:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:45.208 [2024-07-15 19:24:34.875950] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:45.208 [2024-07-15 19:24:34.876045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.466 [2024-07-15 19:24:35.010828] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.466 [2024-07-15 19:24:35.069077] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.466 [2024-07-15 19:24:35.069512] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:45.466 [2024-07-15 19:24:35.069629] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.466 [2024-07-15 19:24:35.069743] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.466 [2024-07-15 19:24:35.069812] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.466 [2024-07-15 19:24:35.069904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.399 19:24:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.399 [2024-07-15 19:24:36.184060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.656 19:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:46.656 19:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:46.656 19:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:46.913 Malloc1 00:08:46.913 19:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:47.252 Malloc2 00:08:47.252 19:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.252 19:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:47.510 19:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.075 [2024-07-15 19:24:37.617056] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ace11f18-c4e4-4b47-b23a-32aaf0a41ce2 -a 10.0.0.2 -s 4420 -i 4 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.075 19:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
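On the host side the ns_masking test always connects with an explicit host NQN and host identifier, since the later masking checks depend on which host is asking, and then waits for a block device carrying the target's serial to appear before inspecting it. A simplified equivalent of the connect/waitforserial steps traced above (the loop is a rough stand-in for autotest_common.sh's waitforserial; the hostid is the uuidgen value from this run):

    hostid=ace11f18-c4e4-4b47-b23a-32aaf0a41ce2

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$hostid" -i 4

    # Wait (up to ~30s) for a namespace with the expected serial to show up as a block device.
    i=0
    while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 1 )); do
        (( i++ > 15 )) && { echo "no namespace appeared for SPDKISFASTANDAWESOME" >&2; exit 1; }
        sleep 2
    done

The checks that follow then resolve the controller name with nvme list-subsys and compare the nguid reported by nvme id-ns against all zeroes to decide whether a given namespace is visible to this host.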
00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:49.974 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:50.231 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:50.231 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:50.231 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:08:50.231 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:50.231 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:50.231 [ 0]:0x1 00:08:50.231 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:50.232 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:50.232 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f0bb6924cc6b4a72a56fd8bb19d1843f 00:08:50.232 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f0bb6924cc6b4a72a56fd8bb19d1843f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:50.232 19:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:50.489 [ 0]:0x1 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f0bb6924cc6b4a72a56fd8bb19d1843f 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f0bb6924cc6b4a72a56fd8bb19d1843f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:50.489 [ 1]:0x2 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:50.489 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:50.747 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=417bd2a4391e4b959b169c4208f94472 00:08:50.747 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:50.747 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:08:50.747 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.747 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.005 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:51.263 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:08:51.263 19:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ace11f18-c4e4-4b47-b23a-32aaf0a41ce2 -a 10.0.0.2 -s 4420 -i 4 00:08:51.520 19:24:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:51.520 19:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:51.520 19:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.520 19:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:08:51.520 19:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:08:51.520 19:24:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:53.418 [ 0]:0x2 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.418 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:53.675 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=417bd2a4391e4b959b169c4208f94472 00:08:53.675 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.675 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:53.933 [ 0]:0x1 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f0bb6924cc6b4a72a56fd8bb19d1843f 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f0bb6924cc6b4a72a56fd8bb19d1843f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:53.933 [ 1]:0x2 00:08:53.933 19:24:43 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=417bd2a4391e4b959b169c4208f94472 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.933 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:54.197 [ 0]:0x2 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:54.197 19:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:54.454 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=417bd2a4391e4b959b169c4208f94472 00:08:54.454 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:54.454 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
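Note: the masking pass that ends with the disconnect below is the core of the test. A namespace attached with --no-auto-visible is hidden from every host (its NGUID reads back as all zeroes, which is why ns_is_visible is wrapped in NOT), it becomes visible only after nvmf_ns_add_host, and nvmf_ns_remove_host hides it again. A condensed sketch of that RPC sequence, using only commands and NQNs already present in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # attach Malloc1 as namespace 1, hidden from all hosts by default
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # grant namespace 1 to host1, then revoke it again
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1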
00:08:54.454 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.454 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ace11f18-c4e4-4b47-b23a-32aaf0a41ce2 -a 10.0.0.2 -s 4420 -i 4 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:08:54.713 19:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:57.253 [ 0]:0x1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f0bb6924cc6b4a72a56fd8bb19d1843f 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f0bb6924cc6b4a72a56fd8bb19d1843f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:08:57.253 [ 1]:0x2 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=417bd2a4391e4b959b169c4208f94472 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:57.253 19:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:57.253 [ 0]:0x2 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:57.253 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=417bd2a4391e4b959b169c4208f94472 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.510 19:24:47 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.510 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.511 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.511 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:57.511 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:57.768 [2024-07-15 19:24:47.327674] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:08:57.768 2024/07/15 19:24:47 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:08:57.768 request: 00:08:57.768 { 00:08:57.768 "method": "nvmf_ns_remove_host", 00:08:57.768 "params": { 00:08:57.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.768 "nsid": 2, 00:08:57.768 "host": "nqn.2016-06.io.spdk:host1" 00:08:57.768 } 00:08:57.768 } 00:08:57.768 Got JSON-RPC error response 00:08:57.768 GoRPCClient: error on JSON-RPC call 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:57.768 19:24:47 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.768 [ 0]:0x2 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:57.768 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=417bd2a4391e4b959b169c4208f94472 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 417bd2a4391e4b959b169c4208f94472 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72392 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72392 /var/tmp/host.sock 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72392 ']' 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
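Note: target/ns_masking.sh@117-121 above start a second SPDK application (spdk_tgt -r /var/tmp/host.sock -m 2) that acts as the NVMe-oF host for the rest of the test, so the remaining visibility checks go through its RPC socket instead of nvme-cli. The hostrpc helper seen in the following lines is simply rpc.py pointed at that socket; a minimal sketch, reusing the socket path, address and NQNs from this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # attach the target subsystem from the host-side application as controller "nvme0"
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

    # list the resulting bdevs; which nvmeXnY entries appear depends on namespace visibility
    $rpc -s /var/tmp/host.sock bdev_get_bdevs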
00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.769 19:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:57.769 [2024-07-15 19:24:47.559096] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:57.769 [2024-07-15 19:24:47.559184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72392 ] 00:08:58.027 [2024-07-15 19:24:47.690273] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.027 [2024-07-15 19:24:47.779499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.958 19:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.958 19:24:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:58.958 19:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.216 19:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.474 19:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d65d0e0f-aee6-46a2-bc3c-af6cf20ee50b 00:08:59.474 19:24:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:08:59.474 19:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D65D0E0FAEE646A2BC3CAF6CF20EE50B -i 00:08:59.733 19:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid af4a4b8d-2785-4c2d-bc86-0fda8a1462b8 00:08:59.733 19:24:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:08:59.733 19:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AF4A4B8D27854C2DBC860FDA8A1462B8 -i 00:08:59.991 19:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:00.249 19:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:00.507 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:00.507 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:00.764 nvme0n1 00:09:00.764 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:00.764 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:01.329 nvme1n2 00:09:01.329 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:01.329 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:01.329 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:01.329 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:01.329 19:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:01.329 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:01.329 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:01.329 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:01.329 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d65d0e0f-aee6-46a2-bc3c-af6cf20ee50b == \d\6\5\d\0\e\0\f\-\a\e\e\6\-\4\6\a\2\-\b\c\3\c\-\a\f\6\c\f\2\0\e\e\5\0\b ]] 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ af4a4b8d-2785-4c2d-bc86-0fda8a1462b8 == \a\f\4\a\4\b\8\d\-\2\7\8\5\-\4\c\2\d\-\b\c\8\6\-\0\f\d\a\8\a\1\4\6\2\b\8 ]] 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72392 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72392 ']' 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72392 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.897 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72392 00:09:02.155 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:02.155 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:02.155 killing process with pid 72392 00:09:02.155 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72392' 00:09:02.155 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72392 00:09:02.155 19:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72392 00:09:02.413 19:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.413 19:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:02.413 19:24:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:02.413 19:24:52 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.413 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.671 rmmod nvme_tcp 00:09:02.671 rmmod nvme_fabrics 00:09:02.671 rmmod nvme_keyring 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72015 ']' 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72015 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72015 ']' 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72015 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72015 00:09:02.671 killing process with pid 72015 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72015' 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72015 00:09:02.671 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72015 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:02.929 00:09:02.929 real 0m18.200s 00:09:02.929 user 0m29.366s 00:09:02.929 sys 0m2.521s 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.929 19:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.929 ************************************ 00:09:02.929 END TEST nvmf_ns_masking 00:09:02.929 ************************************ 00:09:02.929 19:24:52 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:02.929 19:24:52 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:02.929 19:24:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:02.929 19:24:52 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:02.929 19:24:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.929 19:24:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.929 19:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.929 ************************************ 00:09:02.929 START TEST nvmf_host_management 00:09:02.929 ************************************ 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:02.929 * Looking for test storage... 00:09:02.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:02.929 Cannot find device "nvmf_tgt_br" 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:02.929 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.186 Cannot find device "nvmf_tgt_br2" 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:03.186 19:24:52 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:03.186 Cannot find device "nvmf_tgt_br" 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:03.186 Cannot find device "nvmf_tgt_br2" 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:03.186 
19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:03.186 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.444 19:24:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.444 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.444 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.444 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.444 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:03.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:03.444 00:09:03.444 --- 10.0.0.2 ping statistics --- 00:09:03.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.444 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:03.444 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:03.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:03.444 00:09:03.444 --- 10.0.0.3 ping statistics --- 00:09:03.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.445 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:03.445 00:09:03.445 --- 10.0.0.1 ping statistics --- 00:09:03.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.445 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72756 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72756 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72756 ']' 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.445 19:24:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.445 [2024-07-15 19:24:53.135676] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:09:03.445 [2024-07-15 19:24:53.135783] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.703 [2024-07-15 19:24:53.277253] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.703 [2024-07-15 19:24:53.348401] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.703 [2024-07-15 19:24:53.348702] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.703 [2024-07-15 19:24:53.348815] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.703 [2024-07-15 19:24:53.348924] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.703 [2024-07-15 19:24:53.349018] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:03.703 [2024-07-15 19:24:53.349230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.703 [2024-07-15 19:24:53.349376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.703 [2024-07-15 19:24:53.349564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.703 [2024-07-15 19:24:53.349572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.632 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 [2024-07-15 19:24:54.127822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 Malloc0 00:09:04.633 [2024-07-15 19:24:54.198706] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
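The rpcs.txt batch that host_management.sh pipes into rpc_cmd is not echoed verbatim, but the "Malloc0" and "Listening on 10.0.0.2 port 4420" notices above imply a sequence along these lines (a sketch only: the NQNs, address, port and transport options are taken from the log; the malloc sizes, serial number and exact flags are assumptions):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as logged (-u 8192 = in-capsule data size)
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                        # RAM bdev to export (64 MiB, 512 B blocks assumed)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # subsystem (serial number assumed)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0    # expose Malloc0 as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420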
00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72829 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72829 /var/tmp/bdevperf.sock 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72829 ']' 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:04.633 { 00:09:04.633 "params": { 00:09:04.633 "name": "Nvme$subsystem", 00:09:04.633 "trtype": "$TEST_TRANSPORT", 00:09:04.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.633 "adrfam": "ipv4", 00:09:04.633 "trsvcid": "$NVMF_PORT", 00:09:04.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.633 "hdgst": ${hdgst:-false}, 00:09:04.633 "ddgst": ${ddgst:-false} 00:09:04.633 }, 00:09:04.633 "method": "bdev_nvme_attach_controller" 00:09:04.633 } 00:09:04.633 EOF 00:09:04.633 )") 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:04.633 19:24:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:04.633 "params": { 00:09:04.633 "name": "Nvme0", 00:09:04.633 "trtype": "tcp", 00:09:04.633 "traddr": "10.0.0.2", 00:09:04.633 "adrfam": "ipv4", 00:09:04.633 "trsvcid": "4420", 00:09:04.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:04.633 "hdgst": false, 00:09:04.633 "ddgst": false 00:09:04.633 }, 00:09:04.633 "method": "bdev_nvme_attach_controller" 00:09:04.633 }' 00:09:04.633 [2024-07-15 19:24:54.300734] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:09:04.633 [2024-07-15 19:24:54.301268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72829 ] 00:09:04.890 [2024-07-15 19:24:54.440643] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.890 [2024-07-15 19:24:54.511379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.890 Running I/O for 10 seconds... 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:05.823 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.823 
19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.823 [2024-07-15 19:24:55.337929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.337994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.823 [2024-07-15 19:24:55.338560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.823 [2024-07-15 19:24:55.338570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.338981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.338993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:09:05.824 [2024-07-15 19:24:55.339384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.824 [2024-07-15 19:24:55.339460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.824 [2024-07-15 19:24:55.339471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf9c0 is same with the state(5) to be set 00:09:05.824 [2024-07-15 19:24:55.339520] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19cf9c0 was disconnected and freed. reset controller. 00:09:05.824 [2024-07-15 19:24:55.340694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:05.824 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.825 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:05.825 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.825 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.825 task offset: 130944 on job bdev=Nvme0n1 fails 00:09:05.825 00:09:05.825 Latency(us) 00:09:05.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.825 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:05.825 Job: Nvme0n1 ended in about 0.68 seconds with error 00:09:05.825 Verification LBA range: start 0x0 length 0x400 00:09:05.825 Nvme0n1 : 0.68 1418.29 88.64 94.55 0.00 41084.87 5928.03 41466.41 00:09:05.825 =================================================================================================================== 00:09:05.825 Total : 1418.29 88.64 94.55 0.00 41084.87 5928.03 41466.41 00:09:05.825 [2024-07-15 19:24:55.342780] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.825 [2024-07-15 19:24:55.342811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cfc90 (9): Bad file descriptor 00:09:05.825 [2024-07-15 19:24:55.351184] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
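The flood of ABORTED - SQ DELETION completions, the failed job statistics and the subsequent successful reset above are the intended behaviour of this test: the host's access is revoked mid-I/O, then restored so the host-side reset can reconnect. The same cycle can be driven by hand against a running target with the two RPCs the script uses (NQNs as in the log, rpc.py run from the SPDK repo root):

    # Revoke the initiator's access: its queues are torn down and in-flight commands are aborted.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Re-grant access: the automatic bdev_nvme reset on the host side can now reconnect and resume I/O.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0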
00:09:05.825 19:24:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.825 19:24:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72829 00:09:06.757 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72829) - No such process 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:06.757 { 00:09:06.757 "params": { 00:09:06.757 "name": "Nvme$subsystem", 00:09:06.757 "trtype": "$TEST_TRANSPORT", 00:09:06.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.757 "adrfam": "ipv4", 00:09:06.757 "trsvcid": "$NVMF_PORT", 00:09:06.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.757 "hdgst": ${hdgst:-false}, 00:09:06.757 "ddgst": ${ddgst:-false} 00:09:06.757 }, 00:09:06.757 "method": "bdev_nvme_attach_controller" 00:09:06.757 } 00:09:06.757 EOF 00:09:06.757 )") 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:06.757 19:24:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:06.757 "params": { 00:09:06.757 "name": "Nvme0", 00:09:06.757 "trtype": "tcp", 00:09:06.757 "traddr": "10.0.0.2", 00:09:06.757 "adrfam": "ipv4", 00:09:06.757 "trsvcid": "4420", 00:09:06.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:06.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:06.757 "hdgst": false, 00:09:06.757 "ddgst": false 00:09:06.757 }, 00:09:06.757 "method": "bdev_nvme_attach_controller" 00:09:06.757 }' 00:09:06.757 [2024-07-15 19:24:56.401115] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:09:06.757 [2024-07-15 19:24:56.401199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72878 ] 00:09:06.757 [2024-07-15 19:24:56.538314] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.014 [2024-07-15 19:24:56.606985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.014 Running I/O for 1 seconds... 
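For this second bdevperf pass, gen_nvmf_target_json only prints the bdev_nvme_attach_controller entry shown above; --json consumes it wrapped in a full bdev-subsystem configuration. A sketch of an equivalent standalone invocation (the wrapper structure and the temporary file path are assumptions; the params block and the bdevperf options are the logged ones):

    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # 64-deep, 64 KiB verify workload for 1 second, matching the logged run.
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1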
00:09:08.385 00:09:08.385 Latency(us) 00:09:08.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.385 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:08.385 Verification LBA range: start 0x0 length 0x400 00:09:08.385 Nvme0n1 : 1.04 1538.59 96.16 0.00 0.00 40669.34 5034.36 43134.60 00:09:08.385 =================================================================================================================== 00:09:08.385 Total : 1538.59 96.16 0.00 0.00 40669.34 5034.36 43134.60 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.385 19:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.385 rmmod nvme_tcp 00:09:08.385 rmmod nvme_fabrics 00:09:08.385 rmmod nvme_keyring 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72756 ']' 00:09:08.385 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72756 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72756 ']' 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72756 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72756 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72756' 00:09:08.386 killing process with pid 72756 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72756 00:09:08.386 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72756 00:09:08.644 [2024-07-15 19:24:58.217534] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:08.644 00:09:08.644 real 0m5.696s 00:09:08.644 user 0m22.202s 00:09:08.644 sys 0m1.212s 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.644 19:24:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 ************************************ 00:09:08.644 END TEST nvmf_host_management 00:09:08.644 ************************************ 00:09:08.644 19:24:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.644 19:24:58 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:08.644 19:24:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.644 19:24:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.644 19:24:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 ************************************ 00:09:08.644 START TEST nvmf_lvol 00:09:08.644 ************************************ 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:08.644 * Looking for test storage... 
00:09:08.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:08.644 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:08.645 19:24:58 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.645 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:08.902 Cannot find device "nvmf_tgt_br" 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.902 Cannot find device "nvmf_tgt_br2" 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:08.902 Cannot find device "nvmf_tgt_br" 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:08.902 Cannot find device "nvmf_tgt_br2" 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:08.902 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:09.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:09.159 00:09:09.159 --- 10.0.0.2 ping statistics --- 00:09:09.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.159 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:09.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:09.159 00:09:09.159 --- 10.0.0.3 ping statistics --- 00:09:09.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.159 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:09.159 00:09:09.159 --- 10.0.0.1 ping statistics --- 00:09:09.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.159 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73090 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73090 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73090 ']' 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.159 19:24:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.159 [2024-07-15 19:24:58.851378] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:09:09.159 [2024-07-15 19:24:58.851483] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.416 [2024-07-15 19:24:58.990534] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.416 [2024-07-15 19:24:59.050514] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.416 [2024-07-15 19:24:59.050571] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:09.417 [2024-07-15 19:24:59.050583] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.417 [2024-07-15 19:24:59.050591] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.417 [2024-07-15 19:24:59.050599] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.417 [2024-07-15 19:24:59.050699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.417 [2024-07-15 19:24:59.050769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.417 [2024-07-15 19:24:59.050881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.376 19:24:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:10.376 [2024-07-15 19:25:00.109104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.376 19:25:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.940 19:25:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:10.940 19:25:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.940 19:25:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:10.940 19:25:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:11.198 19:25:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:11.455 19:25:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37b64e19-16a4-4346-915b-1089188b2059 00:09:11.455 19:25:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 37b64e19-16a4-4346-915b-1089188b2059 lvol 20 00:09:12.020 19:25:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=992fb5da-bf0c-4ca3-bfbf-b020fabd75d7 00:09:12.020 19:25:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:12.020 19:25:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 992fb5da-bf0c-4ca3-bfbf-b020fabd75d7 00:09:12.277 19:25:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:12.534 [2024-07-15 19:25:02.265532] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.534 19:25:02 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.791 19:25:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73238 00:09:12.791 19:25:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:12.791 19:25:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:14.162 19:25:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 992fb5da-bf0c-4ca3-bfbf-b020fabd75d7 MY_SNAPSHOT 00:09:14.162 19:25:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6103eebf-6015-4f43-8c27-7e9f57d7cfb2 00:09:14.162 19:25:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 992fb5da-bf0c-4ca3-bfbf-b020fabd75d7 30 00:09:14.727 19:25:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 6103eebf-6015-4f43-8c27-7e9f57d7cfb2 MY_CLONE 00:09:14.727 19:25:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a0d9b32a-ff88-453c-b0f1-e350ef371c1b 00:09:14.727 19:25:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a0d9b32a-ff88-453c-b0f1-e350ef371c1b 00:09:15.657 19:25:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73238 00:09:23.833 Initializing NVMe Controllers 00:09:23.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:23.833 Controller IO queue size 128, less than required. 00:09:23.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:23.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:23.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:23.833 Initialization complete. Launching workers. 
00:09:23.833 ======================================================== 00:09:23.833 Latency(us) 00:09:23.833 Device Information : IOPS MiB/s Average min max 00:09:23.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10234.00 39.98 12508.02 2702.36 51661.33 00:09:23.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10140.70 39.61 12621.48 2762.50 64733.78 00:09:23.833 ======================================================== 00:09:23.833 Total : 20374.70 79.59 12564.49 2702.36 64733.78 00:09:23.833 00:09:23.833 19:25:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.833 19:25:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 992fb5da-bf0c-4ca3-bfbf-b020fabd75d7 00:09:23.833 19:25:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37b64e19-16a4-4346-915b-1089188b2059 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.091 rmmod nvme_tcp 00:09:24.091 rmmod nvme_fabrics 00:09:24.091 rmmod nvme_keyring 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73090 ']' 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73090 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73090 ']' 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73090 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73090 00:09:24.091 killing process with pid 73090 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73090' 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73090 00:09:24.091 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73090 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
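For reference, the nvmf_lvol flow traced above condenses to the RPC sequence below. This is a minimal sketch assembled from the commands visible in the trace (rpc.py path, malloc/raid/lvstore setup, the 10.0.0.2:4420 listener, snapshot/resize/clone/inflate); the variable capture and comments are added for illustration and are not the test script itself.

#!/usr/bin/env bash
# Condensed sketch of the RPC calls driven by nvmf_lvol above.
# Assumes nvmf_tgt is already running and rpc.py talks to its default socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport
malloc0=$($rpc bdev_malloc_create 64 512)                  # two 64 MiB malloc bdevs, 512 B blocks
malloc1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$malloc0 $malloc1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)             # lvstore on the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)            # 20 MiB logical volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)        # snapshot, then grow the origin
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)             # clone the snapshot and inflate it
$rpc bdev_lvol_inflate "$clone"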
00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:24.349 ************************************ 00:09:24.349 END TEST nvmf_lvol 00:09:24.349 ************************************ 00:09:24.349 00:09:24.349 real 0m15.654s 00:09:24.349 user 1m5.856s 00:09:24.349 sys 0m3.656s 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.349 19:25:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.349 19:25:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:24.350 19:25:14 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:24.350 19:25:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:24.350 19:25:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.350 19:25:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.350 ************************************ 00:09:24.350 START TEST nvmf_lvs_grow 00:09:24.350 ************************************ 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:24.350 * Looking for test storage... 
00:09:24.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.350 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:24.608 Cannot find device "nvmf_tgt_br" 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.608 Cannot find device "nvmf_tgt_br2" 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:24.608 Cannot find device "nvmf_tgt_br" 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:24.608 Cannot find device "nvmf_tgt_br2" 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.608 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:24.608 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:24.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:24.866 00:09:24.866 --- 10.0.0.2 ping statistics --- 00:09:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.866 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:24.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:24.866 00:09:24.866 --- 10.0.0.3 ping statistics --- 00:09:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.866 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:24.866 00:09:24.866 --- 10.0.0.1 ping statistics --- 00:09:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.866 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.866 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73599 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73599 00:09:24.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73599 ']' 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
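The veth topology that nvmf_veth_init builds in the trace above can be summarized by the sketch below. It keeps only the first target interface and omits the cleanup/retry handling and the second nvmf_tgt_if2/nvmf_tgt_br2 pair, so it is an illustration of the traced ip/iptables commands rather than the helper itself.

#!/usr/bin/env bash
# Minimal sketch of the namespace + veth + bridge setup used for NVMe/TCP testing above.
set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the root namespace, the target end moves into the netns
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends so the root namespace (initiator) and nvmf_tgt_ns_spdk (target) can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# allow NVMe/TCP traffic on port 4420 and verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2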
00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.867 19:25:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.867 [2024-07-15 19:25:14.547836] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:09:24.867 [2024-07-15 19:25:14.548128] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.125 [2024-07-15 19:25:14.685331] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.125 [2024-07-15 19:25:14.755089] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.125 [2024-07-15 19:25:14.755151] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.125 [2024-07-15 19:25:14.755166] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.125 [2024-07-15 19:25:14.755176] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.125 [2024-07-15 19:25:14.755185] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.125 [2024-07-15 19:25:14.755213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.059 19:25:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:26.318 [2024-07-15 19:25:15.865909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.318 ************************************ 00:09:26.318 START TEST lvs_grow_clean 00:09:26.318 ************************************ 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:26.318 19:25:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:26.318 19:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.576 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.576 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.834 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:26.834 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:26.834 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:27.092 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:27.092 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:27.092 19:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 13b02d9d-e3be-4547-9de2-be636dc6b745 lvol 150 00:09:27.350 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a811748-548b-49e9-bb3a-b6d9fbbf8837 00:09:27.350 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:27.350 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.609 [2024-07-15 19:25:17.330290] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:27.609 [2024-07-15 19:25:17.330430] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.609 true 00:09:27.609 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:27.609 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.867 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.867 19:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:28.433 19:25:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a811748-548b-49e9-bb3a-b6d9fbbf8837 00:09:28.434 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.691 [2024-07-15 19:25:18.450950] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.691 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73766 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73766 /var/tmp/bdevperf.sock 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73766 ']' 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.950 19:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:29.208 [2024-07-15 19:25:18.777543] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:09:29.208 [2024-07-15 19:25:18.777659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73766 ] 00:09:29.208 [2024-07-15 19:25:18.912045] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.208 [2024-07-15 19:25:18.974233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.142 19:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.142 19:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:30.142 19:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:30.400 Nvme0n1 00:09:30.400 19:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:30.659 [ 00:09:30.659 { 00:09:30.659 "aliases": [ 00:09:30.659 "0a811748-548b-49e9-bb3a-b6d9fbbf8837" 00:09:30.659 ], 00:09:30.659 "assigned_rate_limits": { 00:09:30.659 "r_mbytes_per_sec": 0, 00:09:30.659 "rw_ios_per_sec": 0, 00:09:30.659 "rw_mbytes_per_sec": 0, 00:09:30.659 "w_mbytes_per_sec": 0 00:09:30.659 }, 00:09:30.659 "block_size": 4096, 00:09:30.659 "claimed": false, 00:09:30.659 "driver_specific": { 00:09:30.659 "mp_policy": "active_passive", 00:09:30.659 "nvme": [ 00:09:30.659 { 00:09:30.659 "ctrlr_data": { 00:09:30.659 "ana_reporting": false, 00:09:30.659 "cntlid": 1, 00:09:30.659 "firmware_revision": "24.09", 00:09:30.659 "model_number": "SPDK bdev Controller", 00:09:30.659 "multi_ctrlr": true, 00:09:30.659 "oacs": { 00:09:30.659 "firmware": 0, 00:09:30.659 "format": 0, 00:09:30.659 "ns_manage": 0, 00:09:30.659 "security": 0 00:09:30.659 }, 00:09:30.659 "serial_number": "SPDK0", 00:09:30.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.659 "vendor_id": "0x8086" 00:09:30.659 }, 00:09:30.659 "ns_data": { 00:09:30.659 "can_share": true, 00:09:30.659 "id": 1 00:09:30.659 }, 00:09:30.659 "trid": { 00:09:30.659 "adrfam": "IPv4", 00:09:30.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.659 "traddr": "10.0.0.2", 00:09:30.659 "trsvcid": "4420", 00:09:30.659 "trtype": "TCP" 00:09:30.659 }, 00:09:30.659 "vs": { 00:09:30.659 "nvme_version": "1.3" 00:09:30.659 } 00:09:30.659 } 00:09:30.659 ] 00:09:30.659 }, 00:09:30.659 "memory_domains": [ 00:09:30.659 { 00:09:30.659 "dma_device_id": "system", 00:09:30.659 "dma_device_type": 1 00:09:30.659 } 00:09:30.659 ], 00:09:30.659 "name": "Nvme0n1", 00:09:30.659 "num_blocks": 38912, 00:09:30.659 "product_name": "NVMe disk", 00:09:30.659 "supported_io_types": { 00:09:30.659 "abort": true, 00:09:30.659 "compare": true, 00:09:30.659 "compare_and_write": true, 00:09:30.659 "copy": true, 00:09:30.659 "flush": true, 00:09:30.659 "get_zone_info": false, 00:09:30.659 "nvme_admin": true, 00:09:30.659 "nvme_io": true, 00:09:30.659 "nvme_io_md": false, 00:09:30.659 "nvme_iov_md": false, 00:09:30.659 "read": true, 00:09:30.659 "reset": true, 00:09:30.659 "seek_data": false, 00:09:30.659 "seek_hole": false, 00:09:30.659 "unmap": true, 00:09:30.659 "write": true, 00:09:30.659 "write_zeroes": true, 00:09:30.659 "zcopy": false, 00:09:30.659 
"zone_append": false, 00:09:30.659 "zone_management": false 00:09:30.659 }, 00:09:30.659 "uuid": "0a811748-548b-49e9-bb3a-b6d9fbbf8837", 00:09:30.659 "zoned": false 00:09:30.659 } 00:09:30.659 ] 00:09:30.659 19:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73808 00:09:30.659 19:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:30.659 19:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:30.659 Running I/O for 10 seconds... 00:09:32.043 Latency(us) 00:09:32.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.043 Nvme0n1 : 1.00 8170.00 31.91 0.00 0.00 0.00 0.00 0.00 00:09:32.043 =================================================================================================================== 00:09:32.043 Total : 8170.00 31.91 0.00 0.00 0.00 0.00 0.00 00:09:32.043 00:09:32.606 19:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:32.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.863 Nvme0n1 : 2.00 8210.00 32.07 0.00 0.00 0.00 0.00 0.00 00:09:32.863 =================================================================================================================== 00:09:32.863 Total : 8210.00 32.07 0.00 0.00 0.00 0.00 0.00 00:09:32.863 00:09:32.863 true 00:09:32.863 19:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:32.863 19:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:33.120 19:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:33.120 19:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:33.120 19:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73808 00:09:33.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.683 Nvme0n1 : 3.00 8178.00 31.95 0.00 0.00 0.00 0.00 0.00 00:09:33.683 =================================================================================================================== 00:09:33.683 Total : 8178.00 31.95 0.00 0.00 0.00 0.00 0.00 00:09:33.683 00:09:35.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.052 Nvme0n1 : 4.00 8181.50 31.96 0.00 0.00 0.00 0.00 0.00 00:09:35.052 =================================================================================================================== 00:09:35.052 Total : 8181.50 31.96 0.00 0.00 0.00 0.00 0.00 00:09:35.052 00:09:35.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.985 Nvme0n1 : 5.00 8169.00 31.91 0.00 0.00 0.00 0.00 0.00 00:09:35.985 =================================================================================================================== 00:09:35.985 Total : 8169.00 31.91 0.00 0.00 0.00 0.00 0.00 00:09:35.985 00:09:36.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.921 
Nvme0n1 : 6.00 8135.50 31.78 0.00 0.00 0.00 0.00 0.00 00:09:36.921 =================================================================================================================== 00:09:36.921 Total : 8135.50 31.78 0.00 0.00 0.00 0.00 0.00 00:09:36.921 00:09:37.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.855 Nvme0n1 : 7.00 8114.29 31.70 0.00 0.00 0.00 0.00 0.00 00:09:37.855 =================================================================================================================== 00:09:37.855 Total : 8114.29 31.70 0.00 0.00 0.00 0.00 0.00 00:09:37.855 00:09:38.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.787 Nvme0n1 : 8.00 8090.75 31.60 0.00 0.00 0.00 0.00 0.00 00:09:38.787 =================================================================================================================== 00:09:38.787 Total : 8090.75 31.60 0.00 0.00 0.00 0.00 0.00 00:09:38.787 00:09:39.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.719 Nvme0n1 : 9.00 8083.00 31.57 0.00 0.00 0.00 0.00 0.00 00:09:39.719 =================================================================================================================== 00:09:39.719 Total : 8083.00 31.57 0.00 0.00 0.00 0.00 0.00 00:09:39.719 00:09:40.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.651 Nvme0n1 : 10.00 8075.30 31.54 0.00 0.00 0.00 0.00 0.00 00:09:40.651 =================================================================================================================== 00:09:40.651 Total : 8075.30 31.54 0.00 0.00 0.00 0.00 0.00 00:09:40.651 00:09:40.651 00:09:40.651 Latency(us) 00:09:40.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.651 Nvme0n1 : 10.02 8074.94 31.54 0.00 0.00 15840.60 7626.01 31933.91 00:09:40.651 =================================================================================================================== 00:09:40.651 Total : 8074.94 31.54 0.00 0.00 15840.60 7626.01 31933.91 00:09:40.651 0 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73766 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73766 ']' 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73766 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73766 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:40.909 killing process with pid 73766 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73766' 00:09:40.909 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.909 00:09:40.909 Latency(us) 00:09:40.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.909 
=================================================================================================================== 00:09:40.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73766 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73766 00:09:40.909 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.167 19:25:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:41.731 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:41.731 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:41.731 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:41.731 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:41.731 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.297 [2024-07-15 19:25:31.804069] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:42.297 19:25:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:42.555 2024/07/15 19:25:32 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:13b02d9d-e3be-4547-9de2-be636dc6b745], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:42.555 request: 00:09:42.555 { 00:09:42.555 "method": "bdev_lvol_get_lvstores", 00:09:42.555 "params": { 00:09:42.555 "uuid": "13b02d9d-e3be-4547-9de2-be636dc6b745" 00:09:42.555 } 00:09:42.555 } 00:09:42.555 Got JSON-RPC error response 00:09:42.555 GoRPCClient: error on JSON-RPC call 00:09:42.555 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:42.555 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:42.555 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:42.555 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:42.555 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.811 aio_bdev 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a811748-548b-49e9-bb3a-b6d9fbbf8837 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0a811748-548b-49e9-bb3a-b6d9fbbf8837 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:42.811 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.069 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a811748-548b-49e9-bb3a-b6d9fbbf8837 -t 2000 00:09:43.327 [ 00:09:43.327 { 00:09:43.327 "aliases": [ 00:09:43.327 "lvs/lvol" 00:09:43.327 ], 00:09:43.327 "assigned_rate_limits": { 00:09:43.327 "r_mbytes_per_sec": 0, 00:09:43.327 "rw_ios_per_sec": 0, 00:09:43.327 "rw_mbytes_per_sec": 0, 00:09:43.327 "w_mbytes_per_sec": 0 00:09:43.327 }, 00:09:43.327 "block_size": 4096, 00:09:43.327 "claimed": false, 00:09:43.327 "driver_specific": { 00:09:43.327 "lvol": { 00:09:43.327 "base_bdev": "aio_bdev", 00:09:43.327 "clone": false, 00:09:43.327 "esnap_clone": false, 00:09:43.327 "lvol_store_uuid": "13b02d9d-e3be-4547-9de2-be636dc6b745", 00:09:43.327 "num_allocated_clusters": 38, 00:09:43.327 "snapshot": false, 00:09:43.327 "thin_provision": false 00:09:43.327 } 00:09:43.327 }, 00:09:43.327 "name": "0a811748-548b-49e9-bb3a-b6d9fbbf8837", 00:09:43.327 "num_blocks": 38912, 00:09:43.327 "product_name": "Logical Volume", 00:09:43.327 "supported_io_types": { 00:09:43.327 "abort": false, 00:09:43.327 "compare": false, 00:09:43.327 "compare_and_write": false, 00:09:43.327 "copy": false, 00:09:43.327 "flush": false, 00:09:43.327 "get_zone_info": false, 00:09:43.327 "nvme_admin": false, 00:09:43.327 "nvme_io": false, 00:09:43.327 "nvme_io_md": false, 00:09:43.327 "nvme_iov_md": false, 00:09:43.327 "read": true, 
00:09:43.327 "reset": true, 00:09:43.327 "seek_data": true, 00:09:43.327 "seek_hole": true, 00:09:43.327 "unmap": true, 00:09:43.327 "write": true, 00:09:43.327 "write_zeroes": true, 00:09:43.327 "zcopy": false, 00:09:43.327 "zone_append": false, 00:09:43.327 "zone_management": false 00:09:43.327 }, 00:09:43.327 "uuid": "0a811748-548b-49e9-bb3a-b6d9fbbf8837", 00:09:43.327 "zoned": false 00:09:43.327 } 00:09:43.327 ] 00:09:43.327 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:43.327 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:43.327 19:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:43.585 19:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:43.585 19:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:43.585 19:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:43.844 19:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:43.844 19:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a811748-548b-49e9-bb3a-b6d9fbbf8837 00:09:44.102 19:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13b02d9d-e3be-4547-9de2-be636dc6b745 00:09:44.360 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:44.617 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.184 ************************************ 00:09:45.185 END TEST lvs_grow_clean 00:09:45.185 ************************************ 00:09:45.185 00:09:45.185 real 0m18.810s 00:09:45.185 user 0m18.160s 00:09:45.185 sys 0m2.109s 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.185 ************************************ 00:09:45.185 START TEST lvs_grow_dirty 00:09:45.185 ************************************ 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.185 19:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.443 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:45.443 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:45.702 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=181ce863-389d-4a68-b86f-143d583b09dc 00:09:45.702 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:09:45.702 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:45.960 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:45.960 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:45.960 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 181ce863-389d-4a68-b86f-143d583b09dc lvol 150 00:09:46.218 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:09:46.218 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.218 19:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:46.475 [2024-07-15 19:25:36.191315] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:46.475 [2024-07-15 19:25:36.191404] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:46.475 true 00:09:46.476 19:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:09:46.476 19:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:46.734 19:25:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:46.734 19:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:46.993 19:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:09:47.558 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:47.558 [2024-07-15 19:25:37.315997] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.558 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74214 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74214 /var/tmp/bdevperf.sock 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74214 ']' 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:47.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.816 19:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:48.075 [2024-07-15 19:25:37.643012] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
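Before the bdevperf run below starts, a condensed view of the grow sequence the trace above just executed may help. A rough bash equivalent (the $rpc/$aio/$lvs/$lvol shorthand variables are added here only for readability; the UUIDs are run-specific):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio"                                   # 200 MiB backing file
    $rpc bdev_aio_create "$aio" aio_bdev 4096                 # expose it as an AIO bdev with 4K blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 4 MiB clusters -> 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB lvol on top of the lvstore

    truncate -s 400M "$aio"                                   # grow the file underneath the bdev
    $rpc bdev_aio_rescan aio_bdev                             # bdev resizes: 51200 -> 102400 blocks
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

Rescanning the AIO bdev does not grow the lvstore by itself; the test only calls bdev_lvol_grow_lvstore -u "$lvs" later, while bdevperf is writing to the lvol over NVMe/TCP (the lvol was exported via nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener on 10.0.0.2:4420), and only after that does total_data_clusters read back as 99.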
00:09:48.075 [2024-07-15 19:25:37.643117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74214 ] 00:09:48.075 [2024-07-15 19:25:37.781088] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.075 [2024-07-15 19:25:37.870054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.036 19:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.036 19:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:49.036 19:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:49.293 Nvme0n1 00:09:49.294 19:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:49.551 [ 00:09:49.552 { 00:09:49.552 "aliases": [ 00:09:49.552 "57f0a306-dfe9-4b33-84b6-9dd3ac427e4d" 00:09:49.552 ], 00:09:49.552 "assigned_rate_limits": { 00:09:49.552 "r_mbytes_per_sec": 0, 00:09:49.552 "rw_ios_per_sec": 0, 00:09:49.552 "rw_mbytes_per_sec": 0, 00:09:49.552 "w_mbytes_per_sec": 0 00:09:49.552 }, 00:09:49.552 "block_size": 4096, 00:09:49.552 "claimed": false, 00:09:49.552 "driver_specific": { 00:09:49.552 "mp_policy": "active_passive", 00:09:49.552 "nvme": [ 00:09:49.552 { 00:09:49.552 "ctrlr_data": { 00:09:49.552 "ana_reporting": false, 00:09:49.552 "cntlid": 1, 00:09:49.552 "firmware_revision": "24.09", 00:09:49.552 "model_number": "SPDK bdev Controller", 00:09:49.552 "multi_ctrlr": true, 00:09:49.552 "oacs": { 00:09:49.552 "firmware": 0, 00:09:49.552 "format": 0, 00:09:49.552 "ns_manage": 0, 00:09:49.552 "security": 0 00:09:49.552 }, 00:09:49.552 "serial_number": "SPDK0", 00:09:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:49.552 "vendor_id": "0x8086" 00:09:49.552 }, 00:09:49.552 "ns_data": { 00:09:49.552 "can_share": true, 00:09:49.552 "id": 1 00:09:49.552 }, 00:09:49.552 "trid": { 00:09:49.552 "adrfam": "IPv4", 00:09:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:49.552 "traddr": "10.0.0.2", 00:09:49.552 "trsvcid": "4420", 00:09:49.552 "trtype": "TCP" 00:09:49.552 }, 00:09:49.552 "vs": { 00:09:49.552 "nvme_version": "1.3" 00:09:49.552 } 00:09:49.552 } 00:09:49.552 ] 00:09:49.552 }, 00:09:49.552 "memory_domains": [ 00:09:49.552 { 00:09:49.552 "dma_device_id": "system", 00:09:49.552 "dma_device_type": 1 00:09:49.552 } 00:09:49.552 ], 00:09:49.552 "name": "Nvme0n1", 00:09:49.552 "num_blocks": 38912, 00:09:49.552 "product_name": "NVMe disk", 00:09:49.552 "supported_io_types": { 00:09:49.552 "abort": true, 00:09:49.552 "compare": true, 00:09:49.552 "compare_and_write": true, 00:09:49.552 "copy": true, 00:09:49.552 "flush": true, 00:09:49.552 "get_zone_info": false, 00:09:49.552 "nvme_admin": true, 00:09:49.552 "nvme_io": true, 00:09:49.552 "nvme_io_md": false, 00:09:49.552 "nvme_iov_md": false, 00:09:49.552 "read": true, 00:09:49.552 "reset": true, 00:09:49.552 "seek_data": false, 00:09:49.552 "seek_hole": false, 00:09:49.552 "unmap": true, 00:09:49.552 "write": true, 00:09:49.552 "write_zeroes": true, 00:09:49.552 "zcopy": false, 00:09:49.552 
"zone_append": false, 00:09:49.552 "zone_management": false 00:09:49.552 }, 00:09:49.552 "uuid": "57f0a306-dfe9-4b33-84b6-9dd3ac427e4d", 00:09:49.552 "zoned": false 00:09:49.552 } 00:09:49.552 ] 00:09:49.552 19:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74266 00:09:49.552 19:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:49.552 19:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:49.552 Running I/O for 10 seconds... 00:09:50.485 Latency(us) 00:09:50.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.485 Nvme0n1 : 1.00 8366.00 32.68 0.00 0.00 0.00 0.00 0.00 00:09:50.485 =================================================================================================================== 00:09:50.485 Total : 8366.00 32.68 0.00 0.00 0.00 0.00 0.00 00:09:50.485 00:09:51.419 19:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 181ce863-389d-4a68-b86f-143d583b09dc 00:09:51.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.676 Nvme0n1 : 2.00 8278.00 32.34 0.00 0.00 0.00 0.00 0.00 00:09:51.677 =================================================================================================================== 00:09:51.677 Total : 8278.00 32.34 0.00 0.00 0.00 0.00 0.00 00:09:51.677 00:09:51.677 true 00:09:51.935 19:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:09:51.935 19:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:52.194 19:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:52.194 19:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:52.194 19:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74266 00:09:52.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.760 Nvme0n1 : 3.00 8252.33 32.24 0.00 0.00 0.00 0.00 0.00 00:09:52.760 =================================================================================================================== 00:09:52.760 Total : 8252.33 32.24 0.00 0.00 0.00 0.00 0.00 00:09:52.760 00:09:53.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.694 Nvme0n1 : 4.00 8231.00 32.15 0.00 0.00 0.00 0.00 0.00 00:09:53.694 =================================================================================================================== 00:09:53.694 Total : 8231.00 32.15 0.00 0.00 0.00 0.00 0.00 00:09:53.694 00:09:54.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.627 Nvme0n1 : 5.00 8185.20 31.97 0.00 0.00 0.00 0.00 0.00 00:09:54.627 =================================================================================================================== 00:09:54.627 Total : 8185.20 31.97 0.00 0.00 0.00 0.00 0.00 00:09:54.627 00:09:55.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.560 
Nvme0n1 : 6.00 8152.50 31.85 0.00 0.00 0.00 0.00 0.00 00:09:55.560 =================================================================================================================== 00:09:55.560 Total : 8152.50 31.85 0.00 0.00 0.00 0.00 0.00 00:09:55.560 00:09:56.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.543 Nvme0n1 : 7.00 7939.71 31.01 0.00 0.00 0.00 0.00 0.00 00:09:56.543 =================================================================================================================== 00:09:56.543 Total : 7939.71 31.01 0.00 0.00 0.00 0.00 0.00 00:09:56.543 00:09:57.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.479 Nvme0n1 : 8.00 7900.12 30.86 0.00 0.00 0.00 0.00 0.00 00:09:57.479 =================================================================================================================== 00:09:57.479 Total : 7900.12 30.86 0.00 0.00 0.00 0.00 0.00 00:09:57.479 00:09:58.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.854 Nvme0n1 : 9.00 7781.33 30.40 0.00 0.00 0.00 0.00 0.00 00:09:58.854 =================================================================================================================== 00:09:58.854 Total : 7781.33 30.40 0.00 0.00 0.00 0.00 0.00 00:09:58.854 00:09:59.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.790 Nvme0n1 : 10.00 7795.80 30.45 0.00 0.00 0.00 0.00 0.00 00:09:59.790 =================================================================================================================== 00:09:59.790 Total : 7795.80 30.45 0.00 0.00 0.00 0.00 0.00 00:09:59.790 00:09:59.790 00:09:59.790 Latency(us) 00:09:59.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.790 Nvme0n1 : 10.00 7804.78 30.49 0.00 0.00 16395.77 6166.34 147753.89 00:09:59.790 =================================================================================================================== 00:09:59.790 Total : 7804.78 30.49 0.00 0.00 16395.77 6166.34 147753.89 00:09:59.790 0 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74214 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74214 ']' 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74214 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74214 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:59.790 killing process with pid 74214 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74214' 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74214 00:09:59.790 Received shutdown signal, test time was about 10.000000 seconds 00:09:59.790 00:09:59.790 Latency(us) 00:09:59.790 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.790 =================================================================================================================== 00:09:59.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74214 00:09:59.790 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.048 19:25:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.307 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:00.307 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:00.566 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:00.566 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:00.566 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73599 00:10:00.566 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73599 00:10:00.825 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73599 Killed "${NVMF_APP[@]}" "$@" 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74430 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74430 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74430 ']' 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
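This is the point of the dirty variant: the original nvmf_tgt (pid 73599 in this run) is killed with SIGKILL while the grown lvstore is still loaded, so it is never cleanly unloaded and its on-disk metadata is left marked dirty. A fresh target (pid 74430, core mask 0x1) is then started and the AIO bdev is re-created, which forces the blobstore recovery path seen in the "Performing recovery on blobstore" notices that follow. In outline, using the same paths as this job:

    kill -9 <pid-of-the-old-target>        # 73599 here; SIGKILL means no clean lvstore shutdown
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

Re-creating the AIO bdev re-opens the dirty lvstore, and the later checks confirm the grow survived the crash: free_clusters comes back as 61 and total_data_clusters as 99, with lvs/lvol intact.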
00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.825 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.825 [2024-07-15 19:25:50.461386] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:00.825 [2024-07-15 19:25:50.461488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.825 [2024-07-15 19:25:50.603145] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.084 [2024-07-15 19:25:50.657897] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.084 [2024-07-15 19:25:50.657974] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.084 [2024-07-15 19:25:50.657984] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.084 [2024-07-15 19:25:50.657992] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.084 [2024-07-15 19:25:50.657999] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.084 [2024-07-15 19:25:50.658025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.084 19:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.343 [2024-07-15 19:25:50.999991] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:01.343 [2024-07-15 19:25:51.000481] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:01.343 [2024-07-15 19:25:51.000809] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:01.343 19:25:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:01.343 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.602 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57f0a306-dfe9-4b33-84b6-9dd3ac427e4d -t 2000 00:10:01.861 [ 00:10:01.861 { 00:10:01.861 "aliases": [ 00:10:01.861 "lvs/lvol" 00:10:01.861 ], 00:10:01.861 "assigned_rate_limits": { 00:10:01.861 "r_mbytes_per_sec": 0, 00:10:01.861 "rw_ios_per_sec": 0, 00:10:01.861 "rw_mbytes_per_sec": 0, 00:10:01.861 "w_mbytes_per_sec": 0 00:10:01.861 }, 00:10:01.861 "block_size": 4096, 00:10:01.861 "claimed": false, 00:10:01.861 "driver_specific": { 00:10:01.861 "lvol": { 00:10:01.861 "base_bdev": "aio_bdev", 00:10:01.861 "clone": false, 00:10:01.861 "esnap_clone": false, 00:10:01.861 "lvol_store_uuid": "181ce863-389d-4a68-b86f-143d583b09dc", 00:10:01.861 "num_allocated_clusters": 38, 00:10:01.861 "snapshot": false, 00:10:01.861 "thin_provision": false 00:10:01.861 } 00:10:01.861 }, 00:10:01.861 "name": "57f0a306-dfe9-4b33-84b6-9dd3ac427e4d", 00:10:01.861 "num_blocks": 38912, 00:10:01.861 "product_name": "Logical Volume", 00:10:01.861 "supported_io_types": { 00:10:01.861 "abort": false, 00:10:01.861 "compare": false, 00:10:01.861 "compare_and_write": false, 00:10:01.861 "copy": false, 00:10:01.861 "flush": false, 00:10:01.861 "get_zone_info": false, 00:10:01.861 "nvme_admin": false, 00:10:01.861 "nvme_io": false, 00:10:01.861 "nvme_io_md": false, 00:10:01.861 "nvme_iov_md": false, 00:10:01.861 "read": true, 00:10:01.861 "reset": true, 00:10:01.861 "seek_data": true, 00:10:01.861 "seek_hole": true, 00:10:01.861 "unmap": true, 00:10:01.861 "write": true, 00:10:01.861 "write_zeroes": true, 00:10:01.861 "zcopy": false, 00:10:01.861 "zone_append": false, 00:10:01.861 "zone_management": false 00:10:01.861 }, 00:10:01.861 "uuid": "57f0a306-dfe9-4b33-84b6-9dd3ac427e4d", 00:10:01.861 "zoned": false 00:10:01.861 } 00:10:01.861 ] 00:10:01.861 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:01.861 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:01.861 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:02.120 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:02.120 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:02.120 19:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.686 [2024-07-15 19:25:52.437928] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.686 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:02.946 2024/07/15 19:25:52 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:181ce863-389d-4a68-b86f-143d583b09dc], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:02.946 request: 00:10:02.946 { 00:10:02.946 "method": "bdev_lvol_get_lvstores", 00:10:02.946 "params": { 00:10:02.946 "uuid": "181ce863-389d-4a68-b86f-143d583b09dc" 00:10:02.946 } 00:10:02.946 } 00:10:02.946 Got JSON-RPC error response 00:10:02.946 GoRPCClient: error on JSON-RPC call 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.946 19:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.517 aio_bdev 00:10:03.517 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:10:03.517 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:10:03.517 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:03.517 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:03.517 19:25:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:03.517 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:03.517 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:03.775 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57f0a306-dfe9-4b33-84b6-9dd3ac427e4d -t 2000 00:10:04.032 [ 00:10:04.032 { 00:10:04.032 "aliases": [ 00:10:04.032 "lvs/lvol" 00:10:04.032 ], 00:10:04.032 "assigned_rate_limits": { 00:10:04.032 "r_mbytes_per_sec": 0, 00:10:04.032 "rw_ios_per_sec": 0, 00:10:04.032 "rw_mbytes_per_sec": 0, 00:10:04.032 "w_mbytes_per_sec": 0 00:10:04.032 }, 00:10:04.032 "block_size": 4096, 00:10:04.032 "claimed": false, 00:10:04.032 "driver_specific": { 00:10:04.032 "lvol": { 00:10:04.032 "base_bdev": "aio_bdev", 00:10:04.032 "clone": false, 00:10:04.032 "esnap_clone": false, 00:10:04.032 "lvol_store_uuid": "181ce863-389d-4a68-b86f-143d583b09dc", 00:10:04.032 "num_allocated_clusters": 38, 00:10:04.032 "snapshot": false, 00:10:04.032 "thin_provision": false 00:10:04.032 } 00:10:04.032 }, 00:10:04.032 "name": "57f0a306-dfe9-4b33-84b6-9dd3ac427e4d", 00:10:04.032 "num_blocks": 38912, 00:10:04.032 "product_name": "Logical Volume", 00:10:04.032 "supported_io_types": { 00:10:04.032 "abort": false, 00:10:04.032 "compare": false, 00:10:04.032 "compare_and_write": false, 00:10:04.032 "copy": false, 00:10:04.032 "flush": false, 00:10:04.032 "get_zone_info": false, 00:10:04.032 "nvme_admin": false, 00:10:04.032 "nvme_io": false, 00:10:04.032 "nvme_io_md": false, 00:10:04.032 "nvme_iov_md": false, 00:10:04.032 "read": true, 00:10:04.032 "reset": true, 00:10:04.032 "seek_data": true, 00:10:04.032 "seek_hole": true, 00:10:04.032 "unmap": true, 00:10:04.032 "write": true, 00:10:04.032 "write_zeroes": true, 00:10:04.032 "zcopy": false, 00:10:04.032 "zone_append": false, 00:10:04.032 "zone_management": false 00:10:04.032 }, 00:10:04.032 "uuid": "57f0a306-dfe9-4b33-84b6-9dd3ac427e4d", 00:10:04.032 "zoned": false 00:10:04.032 } 00:10:04.032 ] 00:10:04.032 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:04.032 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:04.032 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:04.290 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:04.290 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:04.290 19:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:04.547 19:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:04.547 19:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 57f0a306-dfe9-4b33-84b6-9dd3ac427e4d 00:10:04.805 19:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 181ce863-389d-4a68-b86f-143d583b09dc 00:10:05.063 19:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:05.320 19:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:05.578 ************************************ 00:10:05.578 END TEST lvs_grow_dirty 00:10:05.578 ************************************ 00:10:05.578 00:10:05.578 real 0m20.509s 00:10:05.578 user 0m44.501s 00:10:05.578 sys 0m7.703s 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:05.578 nvmf_trace.0 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.578 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.836 rmmod nvme_tcp 00:10:05.836 rmmod nvme_fabrics 00:10:05.836 rmmod nvme_keyring 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74430 ']' 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74430 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74430 ']' 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74430 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:05.836 19:25:55 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74430 00:10:05.836 killing process with pid 74430 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74430' 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74430 00:10:05.836 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74430 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:06.095 00:10:06.095 real 0m41.777s 00:10:06.095 user 1m8.686s 00:10:06.095 sys 0m10.517s 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.095 19:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.095 ************************************ 00:10:06.095 END TEST nvmf_lvs_grow 00:10:06.095 ************************************ 00:10:06.095 19:25:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:06.095 19:25:55 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:06.095 19:25:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:06.095 19:25:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.095 19:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.095 ************************************ 00:10:06.095 START TEST nvmf_bdev_io_wait 00:10:06.095 ************************************ 00:10:06.095 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:06.354 * Looking for test storage... 
00:10:06.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.354 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:06.355 19:25:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:06.355 Cannot find device "nvmf_tgt_br" 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.355 Cannot find device "nvmf_tgt_br2" 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:06.355 Cannot find device "nvmf_tgt_br" 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:06.355 Cannot find device "nvmf_tgt_br2" 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
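The "Cannot find device ..." messages above (and the "Cannot open network namespace ..." ones just below) are only the teardown of interfaces left over from a previous run; nvmf_veth_init then rebuilds the virtual test network shown in the records that follow. Condensed to its essentials, with the same commands as the trace and the various "ip link set ... up" calls omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # target-side ends move into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # host-side peers hang off one bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT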
00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.355 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:06.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:06.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:10:06.613 00:10:06.613 --- 10.0.0.2 ping statistics --- 00:10:06.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.613 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:06.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:10:06.613 00:10:06.613 --- 10.0.0.3 ping statistics --- 00:10:06.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.613 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:06.613 00:10:06.613 --- 10.0.0.1 ping statistics --- 00:10:06.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.613 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74831 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74831 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74831 ']' 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
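The nvmf_veth_init sequence above first tears down any leftover interfaces (hence the harmless "Cannot find device" / "Cannot open network namespace" messages), then builds the test topology: three veth pairs, an nvmf_tgt_ns_spdk namespace holding the target-side ends, an nvmf_br bridge joining the root-namespace peers, an iptables rule admitting NVMe/TCP on port 4420, and a three-way ping check before nvmf_tgt is launched inside the namespace. A condensed standalone sketch of that topology, using only the commands and addresses visible in the log (the cleanup pass and error handling are omitted):

#!/usr/bin/env bash
# Rebuild of the veth/bridge topology used by these nvmf tests.
set -e

ip netns add nvmf_tgt_ns_spdk

# Three veth pairs; the two target interfaces move into the namespace,
# their peers (and the initiator's peer) stay in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace peers together and open TCP/4420 for NVMe-oF.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check, mirroring the three pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1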
00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.613 19:25:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.613 [2024-07-15 19:25:56.415369] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:06.613 [2024-07-15 19:25:56.415496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.871 [2024-07-15 19:25:56.551632] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.871 [2024-07-15 19:25:56.622570] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.871 [2024-07-15 19:25:56.622818] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.871 [2024-07-15 19:25:56.622979] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.871 [2024-07-15 19:25:56.623138] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.871 [2024-07-15 19:25:56.623183] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.871 [2024-07-15 19:25:56.623442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.871 [2024-07-15 19:25:56.623658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.871 [2024-07-15 19:25:56.624171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.872 [2024-07-15 19:25:56.624212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 [2024-07-15 19:25:57.476545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 Malloc0 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 [2024-07-15 19:25:57.523026] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74885 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74887 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:07.804 { 00:10:07.804 "params": { 00:10:07.804 "name": "Nvme$subsystem", 00:10:07.804 "trtype": "$TEST_TRANSPORT", 
00:10:07.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.804 "adrfam": "ipv4", 00:10:07.804 "trsvcid": "$NVMF_PORT", 00:10:07.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.804 "hdgst": ${hdgst:-false}, 00:10:07.804 "ddgst": ${ddgst:-false} 00:10:07.804 }, 00:10:07.804 "method": "bdev_nvme_attach_controller" 00:10:07.804 } 00:10:07.804 EOF 00:10:07.804 )") 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74889 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:07.804 { 00:10:07.804 "params": { 00:10:07.804 "name": "Nvme$subsystem", 00:10:07.804 "trtype": "$TEST_TRANSPORT", 00:10:07.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.804 "adrfam": "ipv4", 00:10:07.804 "trsvcid": "$NVMF_PORT", 00:10:07.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.804 "hdgst": ${hdgst:-false}, 00:10:07.804 "ddgst": ${ddgst:-false} 00:10:07.804 }, 00:10:07.804 "method": "bdev_nvme_attach_controller" 00:10:07.804 } 00:10:07.804 EOF 00:10:07.804 )") 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74892 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:07.804 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:07.805 { 00:10:07.805 "params": { 00:10:07.805 "name": "Nvme$subsystem", 00:10:07.805 "trtype": "$TEST_TRANSPORT", 00:10:07.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.805 "adrfam": "ipv4", 00:10:07.805 "trsvcid": "$NVMF_PORT", 00:10:07.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.805 "hdgst": ${hdgst:-false}, 00:10:07.805 "ddgst": ${ddgst:-false} 00:10:07.805 }, 00:10:07.805 "method": "bdev_nvme_attach_controller" 00:10:07.805 } 00:10:07.805 EOF 00:10:07.805 )") 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:07.805 { 00:10:07.805 "params": { 00:10:07.805 "name": "Nvme$subsystem", 00:10:07.805 "trtype": "$TEST_TRANSPORT", 00:10:07.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.805 "adrfam": "ipv4", 00:10:07.805 "trsvcid": "$NVMF_PORT", 00:10:07.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.805 "hdgst": ${hdgst:-false}, 00:10:07.805 "ddgst": ${ddgst:-false} 00:10:07.805 }, 00:10:07.805 "method": "bdev_nvme_attach_controller" 00:10:07.805 } 00:10:07.805 EOF 00:10:07.805 )") 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:07.805 "params": { 00:10:07.805 "name": "Nvme1", 00:10:07.805 "trtype": "tcp", 00:10:07.805 "traddr": "10.0.0.2", 00:10:07.805 "adrfam": "ipv4", 00:10:07.805 "trsvcid": "4420", 00:10:07.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.805 "hdgst": false, 00:10:07.805 "ddgst": false 00:10:07.805 }, 00:10:07.805 "method": "bdev_nvme_attach_controller" 00:10:07.805 }' 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:07.805 "params": { 00:10:07.805 "name": "Nvme1", 00:10:07.805 "trtype": "tcp", 00:10:07.805 "traddr": "10.0.0.2", 00:10:07.805 "adrfam": "ipv4", 00:10:07.805 "trsvcid": "4420", 00:10:07.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.805 "hdgst": false, 00:10:07.805 "ddgst": false 00:10:07.805 }, 00:10:07.805 "method": "bdev_nvme_attach_controller" 00:10:07.805 }' 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:07.805 "params": { 00:10:07.805 "name": "Nvme1", 00:10:07.805 "trtype": "tcp", 00:10:07.805 "traddr": "10.0.0.2", 00:10:07.805 "adrfam": "ipv4", 00:10:07.805 "trsvcid": "4420", 00:10:07.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.805 "hdgst": false, 00:10:07.805 "ddgst": false 00:10:07.805 }, 00:10:07.805 "method": "bdev_nvme_attach_controller" 00:10:07.805 }' 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:07.805 "params": { 00:10:07.805 "name": "Nvme1", 00:10:07.805 "trtype": "tcp", 00:10:07.805 "traddr": "10.0.0.2", 00:10:07.805 "adrfam": "ipv4", 00:10:07.805 "trsvcid": "4420", 00:10:07.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.805 "hdgst": false, 00:10:07.805 "ddgst": false 00:10:07.805 }, 00:10:07.805 "method": "bdev_nvme_attach_controller" 00:10:07.805 }' 00:10:07.805 [2024-07-15 19:25:57.589157] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:07.805 [2024-07-15 19:25:57.589750] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:07.805 [2024-07-15 19:25:57.591045] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:07.805 [2024-07-15 19:25:57.591116] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:07.805 19:25:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74885 00:10:08.063 [2024-07-15 19:25:57.611458] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:08.063 [2024-07-15 19:25:57.611538] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:08.063 [2024-07-15 19:25:57.611582] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:08.063 [2024-07-15 19:25:57.611643] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:08.063 [2024-07-15 19:25:57.773659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.063 [2024-07-15 19:25:57.817549] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.063 [2024-07-15 19:25:57.828307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:08.063 [2024-07-15 19:25:57.864503] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.320 [2024-07-15 19:25:57.872219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:08.320 [2024-07-15 19:25:57.909349] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.320 [2024-07-15 19:25:57.925387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:08.320 Running I/O for 1 seconds... 00:10:08.320 [2024-07-15 19:25:57.976791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:08.320 Running I/O for 1 seconds... 00:10:08.320 Running I/O for 1 seconds... 00:10:08.320 Running I/O for 1 seconds... 
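The four bdevperf instances started above (write pid 74885, read 74887, flush 74889, unmap 74892) each read their bdev configuration from /dev/fd/63, which gen_nvmf_target_json fills with the fragment shown in the printf output. Reassembled, one of the four runs is roughly equivalent to the standalone invocation below; the JSON values are copied from the log, while the outer subsystems/bdev wrapper is an assumption about how gen_nvmf_target_json packages the fragment, since only the inner entry appears in the xtrace:

# Standalone sketch of the "write" bdevperf run (core mask 0x10, 1 s, queue depth 128).
# The subsystems/bdev wrapper is inferred, not shown verbatim in the log.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF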
00:10:09.313 00:10:09.313 Latency(us) 00:10:09.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.313 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:09.313 Nvme1n1 : 1.02 6114.78 23.89 0.00 0.00 20794.67 9592.09 32648.84 00:10:09.313 =================================================================================================================== 00:10:09.313 Total : 6114.78 23.89 0.00 0.00 20794.67 9592.09 32648.84 00:10:09.313 00:10:09.313 Latency(us) 00:10:09.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.313 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:09.313 Nvme1n1 : 1.00 173603.96 678.14 0.00 0.00 734.42 364.92 1482.01 00:10:09.313 =================================================================================================================== 00:10:09.313 Total : 173603.96 678.14 0.00 0.00 734.42 364.92 1482.01 00:10:09.313 00:10:09.313 Latency(us) 00:10:09.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.313 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:09.313 Nvme1n1 : 1.01 8762.35 34.23 0.00 0.00 14538.71 8400.52 26571.87 00:10:09.313 =================================================================================================================== 00:10:09.313 Total : 8762.35 34.23 0.00 0.00 14538.71 8400.52 26571.87 00:10:09.313 00:10:09.313 Latency(us) 00:10:09.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.313 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:09.313 Nvme1n1 : 1.01 6391.93 24.97 0.00 0.00 19967.74 4974.78 46709.29 00:10:09.313 =================================================================================================================== 00:10:09.313 Total : 6391.93 24.97 0.00 0.00 19967.74 4974.78 46709.29 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74887 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74889 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74892 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.571 rmmod nvme_tcp 00:10:09.571 rmmod nvme_fabrics 00:10:09.571 rmmod nvme_keyring 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74831 ']' 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74831 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74831 ']' 00:10:09.571 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74831 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74831 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.828 killing process with pid 74831 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74831' 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74831 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74831 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:09.828 00:10:09.828 real 0m3.715s 00:10:09.828 user 0m16.396s 00:10:09.828 sys 0m1.712s 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.828 19:25:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.828 ************************************ 00:10:09.828 END TEST nvmf_bdev_io_wait 00:10:09.828 ************************************ 00:10:09.828 19:25:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.828 19:25:59 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:09.828 19:25:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.828 19:25:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.828 19:25:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.828 ************************************ 00:10:09.828 START TEST nvmf_queue_depth 00:10:09.828 ************************************ 00:10:10.086 19:25:59 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:10.086 * Looking for test storage... 00:10:10.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:10.086 Cannot find device "nvmf_tgt_br" 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.086 Cannot find device "nvmf_tgt_br2" 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:10.086 Cannot find device "nvmf_tgt_br" 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:10.086 Cannot find device "nvmf_tgt_br2" 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:10.086 19:25:59 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.086 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:10.360 19:25:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:10:10.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:10.360 00:10:10.360 --- 10.0.0.2 ping statistics --- 00:10:10.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.360 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:10.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:10:10.360 00:10:10.360 --- 10.0.0.3 ping statistics --- 00:10:10.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.360 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:10.360 00:10:10.360 --- 10.0.0.1 ping statistics --- 00:10:10.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.360 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75089 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75089 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75089 ']' 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
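nvmfappstart above backgrounds nvmf_tgt inside the target namespace for the queue-depth test (core mask 0x2 this time, versus 0xF with --wait-for-rpc for the bdev_io_wait run) and then blocks in waitforlisten until pid 75089 answers on /var/tmp/spdk.sock. A simplified stand-in for that launch-and-wait pattern, using the command line from the log plus a plain poll loop in place of the real waitforlisten helper:

# Launch nvmf_tgt in the namespace and wait for its JSON-RPC socket to answer.
# The poll loop is a stand-in for autotest_common.sh's waitforlisten, not a copy of it.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt (pid $nvmfpid) exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"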
00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.360 19:26:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:10.360 [2024-07-15 19:26:00.137960] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:10.360 [2024-07-15 19:26:00.138054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.620 [2024-07-15 19:26:00.278098] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.620 [2024-07-15 19:26:00.344739] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.620 [2024-07-15 19:26:00.344787] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.620 [2024-07-15 19:26:00.344799] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.620 [2024-07-15 19:26:00.344807] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.620 [2024-07-15 19:26:00.344814] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.620 [2024-07-15 19:26:00.344843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.555 [2024-07-15 19:26:01.164467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.555 Malloc0 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.555 [2024-07-15 19:26:01.218175] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75144 00:10:11.555 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75144 /var/tmp/bdevperf.sock 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75144 ']' 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.556 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.556 [2024-07-15 19:26:01.276542] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
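Before bdevperf is pointed at it, the queue-depth target is provisioned entirely through the rpc_cmd calls above: a TCP transport, a 64 MiB / 512 B-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the test suite's thin wrapper around the JSON-RPC client, so the same sequence issued directly with scripts/rpc.py (an equivalent rendering, not a transcript of the wrapper) looks like:

# Provision the NVMe/TCP target over its default RPC socket.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192                                    # transport, args as in the log
$RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # attach Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420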
00:10:11.556 [2024-07-15 19:26:01.276667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75144 ] 00:10:11.814 [2024-07-15 19:26:01.415392] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.814 [2024-07-15 19:26:01.484920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.814 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.815 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:11.815 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:11.815 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.815 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.073 NVMe0n1 00:10:12.073 19:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.073 19:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.073 Running I/O for 10 seconds... 00:10:22.078 00:10:22.078 Latency(us) 00:10:22.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.078 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:22.079 Verification LBA range: start 0x0 length 0x4000 00:10:22.079 NVMe0n1 : 10.08 8692.79 33.96 0.00 0.00 117197.20 28835.84 81979.58 00:10:22.079 =================================================================================================================== 00:10:22.079 Total : 8692.79 33.96 0.00 0.00 117197.20 28835.84 81979.58 00:10:22.079 0 00:10:22.079 19:26:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75144 00:10:22.079 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75144 ']' 00:10:22.079 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75144 00:10:22.079 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:22.079 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.079 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75144 00:10:22.338 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.338 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.338 killing process with pid 75144 00:10:22.338 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75144' 00:10:22.338 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.338 00:10:22.338 Latency(us) 00:10:22.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.338 =================================================================================================================== 00:10:22.338 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.338 19:26:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75144 00:10:22.338 19:26:11 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75144 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:22.338 rmmod nvme_tcp 00:10:22.338 rmmod nvme_fabrics 00:10:22.338 rmmod nvme_keyring 00:10:22.338 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75089 ']' 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75089 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75089 ']' 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75089 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75089 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:22.605 killing process with pid 75089 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75089' 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75089 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75089 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:22.605 00:10:22.605 real 0m12.752s 00:10:22.605 user 0m21.953s 00:10:22.605 sys 0m1.770s 00:10:22.605 ************************************ 00:10:22.605 END 
TEST nvmf_queue_depth 00:10:22.605 ************************************ 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.605 19:26:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.864 19:26:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.864 19:26:12 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:22.864 19:26:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.864 19:26:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.864 19:26:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.864 ************************************ 00:10:22.864 START TEST nvmf_target_multipath 00:10:22.864 ************************************ 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:22.864 * Looking for test storage... 00:10:22.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:10:22.864 19:26:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.865 
19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:10:22.865 Cannot find device "nvmf_tgt_br" 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.865 Cannot find device "nvmf_tgt_br2" 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:22.865 Cannot find device "nvmf_tgt_br" 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:22.865 Cannot find device "nvmf_tgt_br2" 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:22.865 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:23.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:23.125 00:10:23.125 --- 10.0.0.2 ping statistics --- 00:10:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.125 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:23.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:23.125 00:10:23.125 --- 10.0.0.3 ping statistics --- 00:10:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.125 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:23.125 00:10:23.125 --- 10.0.0.1 ping statistics --- 00:10:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.125 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75459 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75459 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75459 ']' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.125 19:26:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:23.384 [2024-07-15 19:26:12.967559] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
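The "Cannot find device" and "Cannot open network namespace" messages earlier in this block are expected: nvmf_veth_init first tears down any interfaces and namespace left over from a previous run, then rebuilds the topology from scratch. For anyone reproducing the environment the target is now being launched into, the setup traced above condenses to roughly the following (same interface names and addresses as in the log; this is a sketch distilled from the xtrace output, not the verbatim common.sh helper):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one for the initiator, two for the target listeners
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target ports 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up, inside and outside the namespace
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the root-namespace ends so all three endpoints share one L2 segment
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # open TCP/4420 toward the host and let traffic hairpin across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the three pings in the log verify reachability in both directions
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once the pings succeed, NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk", so the nvmf_tgt process started next runs inside the namespace and its listeners on 10.0.0.2/10.0.0.3 are reachable from the host-side nvme initiator.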
00:10:23.384 [2024-07-15 19:26:12.967860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.384 [2024-07-15 19:26:13.111032] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.384 [2024-07-15 19:26:13.183384] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.384 [2024-07-15 19:26:13.183808] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.384 [2024-07-15 19:26:13.183909] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.384 [2024-07-15 19:26:13.183995] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.384 [2024-07-15 19:26:13.184082] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.384 [2024-07-15 19:26:13.184348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.384 [2024-07-15 19:26:13.184510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.384 [2024-07-15 19:26:13.184690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.384 [2024-07-15 19:26:13.184695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.345 19:26:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.345 19:26:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:24.345 19:26:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:24.345 19:26:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.345 19:26:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:24.345 19:26:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.345 19:26:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.604 [2024-07-15 19:26:14.272604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.604 19:26:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:24.863 Malloc0 00:10:24.863 19:26:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:25.121 19:26:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.378 19:26:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.636 [2024-07-15 19:26:15.364309] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.636 19:26:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:10:25.894 [2024-07-15 19:26:15.604509] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.894 19:26:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:26.152 19:26:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:26.411 19:26:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.411 19:26:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.411 19:26:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.411 19:26:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:26.411 19:26:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.314 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75602 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:28.315 19:26:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:28.315 [global] 00:10:28.315 thread=1 00:10:28.315 invalidate=1 00:10:28.315 rw=randrw 00:10:28.315 time_based=1 00:10:28.315 runtime=6 00:10:28.315 ioengine=libaio 00:10:28.315 direct=1 00:10:28.315 bs=4096 00:10:28.315 iodepth=128 00:10:28.315 norandommap=0 00:10:28.315 numjobs=1 00:10:28.315 00:10:28.315 verify_dump=1 00:10:28.315 verify_backlog=512 00:10:28.315 verify_state_save=0 00:10:28.315 do_verify=1 00:10:28.315 verify=crc32c-intel 00:10:28.315 [job0] 00:10:28.315 filename=/dev/nvme0n1 00:10:28.315 Could not set queue depth (nvme0n1) 00:10:28.571 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.571 fio-3.35 00:10:28.571 Starting 1 thread 00:10:29.503 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:29.761 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:30.020 19:26:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:30.955 19:26:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:30.955 19:26:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.955 19:26:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:30.955 19:26:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:31.213 19:26:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:31.471 19:26:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:32.844 19:26:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:32.844 19:26:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:32.844 19:26:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:32.844 19:26:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75602 00:10:34.743 00:10:34.743 job0: (groupid=0, jobs=1): err= 0: pid=75623: Mon Jul 15 19:26:24 2024 00:10:34.743 read: IOPS=10.8k, BW=42.1MiB/s (44.1MB/s)(253MiB/6005msec) 00:10:34.743 slat (usec): min=3, max=6020, avg=53.02, stdev=238.33 00:10:34.743 clat (usec): min=830, max=15027, avg=8114.97, stdev=1217.20 00:10:34.743 lat (usec): min=892, max=15037, avg=8167.98, stdev=1227.85 00:10:34.743 clat percentiles (usec): 00:10:34.743 | 1.00th=[ 4883], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7373], 00:10:34.743 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8225], 00:10:34.743 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10028], 00:10:34.743 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13304], 99.95th=[13435], 00:10:34.743 | 99.99th=[14877] 00:10:34.743 bw ( KiB/s): min= 7744, max=28552, per=51.94%, avg=22380.36, stdev=6260.86, samples=11 00:10:34.743 iops : min= 1936, max= 7138, avg=5595.09, stdev=1565.22, samples=11 00:10:34.743 write: IOPS=6399, BW=25.0MiB/s (26.2MB/s)(133MiB/5312msec); 0 zone resets 00:10:34.743 slat (usec): min=4, max=2310, avg=64.42, stdev=159.60 00:10:34.743 clat (usec): min=440, max=13891, avg=6964.63, stdev=1056.70 00:10:34.743 lat (usec): min=691, max=13916, avg=7029.04, stdev=1060.62 00:10:34.743 clat percentiles (usec): 00:10:34.743 | 1.00th=[ 3818], 5.00th=[ 5014], 10.00th=[ 5866], 20.00th=[ 6390], 00:10:34.743 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7242], 00:10:34.743 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8225], 00:10:34.743 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12518], 99.95th=[12649], 00:10:34.743 | 99.99th=[13435] 00:10:34.743 bw ( KiB/s): min= 8192, max=27752, per=87.58%, avg=22420.36, stdev=5964.26, samples=11 00:10:34.743 iops : min= 2048, max= 6938, avg=5605.09, stdev=1491.07, samples=11 00:10:34.743 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:34.743 lat (msec) : 2=0.07%, 4=0.55%, 10=95.44%, 20=3.92% 00:10:34.743 cpu : usr=5.61%, sys=23.82%, ctx=6328, majf=0, minf=108 00:10:34.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:34.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.743 issued rwts: total=64689,33996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.743 00:10:34.743 Run status group 0 (all jobs): 00:10:34.743 READ: bw=42.1MiB/s (44.1MB/s), 42.1MiB/s-42.1MiB/s (44.1MB/s-44.1MB/s), io=253MiB (265MB), run=6005-6005msec 00:10:34.743 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=133MiB (139MB), run=5312-5312msec 00:10:34.743 00:10:34.743 Disk stats (read/write): 00:10:34.743 nvme0n1: ios=63789/33280, 
merge=0/0, ticks=484880/215574, in_queue=700454, util=98.65% 00:10:34.743 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:35.002 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:35.260 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:35.261 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:35.261 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.261 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:35.261 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.261 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:35.261 19:26:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75758 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:36.195 19:26:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:36.195 [global] 00:10:36.195 thread=1 00:10:36.195 invalidate=1 00:10:36.195 rw=randrw 00:10:36.195 time_based=1 00:10:36.195 runtime=6 00:10:36.195 ioengine=libaio 00:10:36.195 direct=1 00:10:36.195 bs=4096 00:10:36.195 iodepth=128 00:10:36.195 norandommap=0 00:10:36.195 numjobs=1 00:10:36.195 00:10:36.195 verify_dump=1 00:10:36.195 verify_backlog=512 00:10:36.195 verify_state_save=0 00:10:36.195 do_verify=1 00:10:36.195 verify=crc32c-intel 00:10:36.195 [job0] 00:10:36.195 filename=/dev/nvme0n1 00:10:36.195 Could not set queue depth (nvme0n1) 00:10:36.454 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.454 fio-3.35 00:10:36.454 Starting 1 thread 00:10:37.385 19:26:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:37.642 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:37.899 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:37.900 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:37.900 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:37.900 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:37.900 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:37.900 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:37.900 19:26:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:38.832 19:26:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:38.832 19:26:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:38.832 19:26:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:38.832 19:26:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:39.090 19:26:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:39.349 19:26:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:40.283 19:26:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:40.283 19:26:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:40.283 19:26:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:40.283 19:26:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75758 00:10:42.810 00:10:42.810 job0: (groupid=0, jobs=1): err= 0: pid=75779: Mon Jul 15 19:26:32 2024 00:10:42.810 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(278MiB/6003msec) 00:10:42.810 slat (usec): min=4, max=5572, avg=42.86, stdev=204.85 00:10:42.810 clat (usec): min=202, max=13773, avg=7408.94, stdev=1569.60 00:10:42.810 lat (usec): min=243, max=13786, avg=7451.80, stdev=1587.59 00:10:42.810 clat percentiles (usec): 00:10:42.810 | 1.00th=[ 3425], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 6128], 00:10:42.810 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:10:42.810 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:10:42.810 | 99.00th=[11338], 99.50th=[11863], 99.90th=[12387], 99.95th=[12518], 00:10:42.810 | 99.99th=[13435] 00:10:42.810 bw ( KiB/s): min=11376, max=40496, per=53.77%, avg=25510.55, stdev=9634.50, samples=11 00:10:42.810 iops : min= 2844, max=10124, avg=6377.64, stdev=2408.62, samples=11 00:10:42.810 write: IOPS=7355, BW=28.7MiB/s (30.1MB/s)(149MiB/5188msec); 0 zone resets 00:10:42.810 slat (usec): min=13, max=1904, avg=55.69, stdev=135.96 00:10:42.810 clat (usec): min=375, max=13152, avg=6140.99, stdev=1587.81 00:10:42.810 lat (usec): min=417, max=13173, avg=6196.68, stdev=1601.04 00:10:42.810 clat percentiles (usec): 00:10:42.810 | 1.00th=[ 2507], 5.00th=[ 3261], 10.00th=[ 3720], 20.00th=[ 4490], 00:10:42.810 | 30.00th=[ 5342], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6915], 00:10:42.810 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 8029], 00:10:42.810 | 99.00th=[ 9503], 99.50th=[10159], 99.90th=[11863], 99.95th=[12125], 00:10:42.810 | 99.99th=[13042] 00:10:42.810 bw ( KiB/s): min=11592, max=40960, per=86.69%, avg=25506.18, stdev=9377.20, samples=11 00:10:42.810 iops : min= 2898, max=10240, avg=6376.55, stdev=2344.30, samples=11 00:10:42.810 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:42.810 lat (msec) : 2=0.14%, 4=6.25%, 10=91.11%, 20=2.47% 00:10:42.810 cpu : usr=6.06%, sys=26.07%, ctx=8440, majf=0, minf=151 00:10:42.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:10:42.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.810 issued rwts: total=71198,38162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.810 00:10:42.810 Run status group 0 (all jobs): 00:10:42.810 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=278MiB (292MB), run=6003-6003msec 00:10:42.810 WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=149MiB (156MB), run=5188-5188msec 00:10:42.810 00:10:42.810 Disk stats (read/write): 00:10:42.810 nvme0n1: ios=70220/37631, merge=0/0, ticks=478867/206822, in_queue=685689, util=98.67% 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:42.810 19:26:32 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.810 rmmod nvme_tcp 00:10:42.810 rmmod nvme_fabrics 00:10:42.810 rmmod nvme_keyring 00:10:42.810 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75459 ']' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75459 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75459 ']' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75459 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75459 00:10:43.068 killing process with pid 75459 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75459' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75459 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75459 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:43.068 ************************************ 00:10:43.068 END TEST nvmf_target_multipath 00:10:43.068 ************************************ 00:10:43.068 00:10:43.068 real 0m20.410s 00:10:43.068 user 1m20.360s 00:10:43.068 sys 0m6.588s 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.068 19:26:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:43.327 19:26:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:43.327 19:26:32 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:43.327 19:26:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:43.327 19:26:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.327 19:26:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.327 ************************************ 00:10:43.327 START TEST nvmf_zcopy 00:10:43.327 ************************************ 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:43.327 * Looking for test storage... 
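Before the zcopy trace repeats the same environment bring-up, it is worth condensing what nvmf_target_multipath above actually exercised. The sketch below strings together the RPC and nvme-cli calls already visible in the trace; the ana_wait helper is only an approximate reconstruction of the script's check_ana_state as inferred from its xtrace output (the real helper lives in test/nvmf/target/multipath.sh), and the host NQN/ID values are the generated ones printed in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # target: TCP transport, Malloc0 namespace, one subsystem, two listeners (= two ANA paths)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420

    # initiator: connect both paths with -g -G so the kernel builds a multipath device
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n $nqn -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n $nqn -a 10.0.0.3 -s 4420 -g -G

    # approximate reconstruction of check_ana_state: poll sysfs until a path reports the expected state
    ana_wait() {                       # e.g. ana_wait nvme0c0n1 inaccessible
        local path=$1 ana_state=$2 timeout=20
        local f=/sys/block/$path/ana_state
        while [[ ! -e $f || "$(<$f)" != "$ana_state" ]]; do
            sleep 1s
            (( timeout-- == 0 )) && return 1
        done
    }

    # while fio (4k randrw, iodepth 128, 6 s, via scripts/fio-wrapper) runs against /dev/nvme0n1,
    # flip the listener ANA states and confirm the kernel's view follows; note the RPC spells the
    # state non_optimized while sysfs reports non-optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    ana_wait nvme0c0n1 inaccessible && ana_wait nvme0c1n1 non-optimized
    # ...the states are then swapped back, and a second fio pass repeats the exercise after the
    # test switches from the numa policy to round-robin (the "echo numa"/"echo round-robin" steps)

    # teardown, mirroring the tail of the trace
    nvme disconnect -n $nqn
    $rpc nvmf_delete_subsystem $nqn

Both fio passes finished without I/O errors (err= 0) and with device utilization above 98%, which is why the test returns 0 and the run proceeds to nvmf_zcopy below.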
00:10:43.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.327 19:26:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.327 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:43.328 Cannot find device "nvmf_tgt_br" 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.328 Cannot find device "nvmf_tgt_br2" 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:43.328 Cannot find device "nvmf_tgt_br" 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:43.328 Cannot find device "nvmf_tgt_br2" 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:43.328 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:43.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:43.588 00:10:43.588 --- 10.0.0.2 ping statistics --- 00:10:43.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.588 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:43.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:43.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:43.588 00:10:43.588 --- 10.0.0.3 ping statistics --- 00:10:43.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.588 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:43.588 00:10:43.588 --- 10.0.0.1 ping statistics --- 00:10:43.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.588 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76059 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76059 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76059 ']' 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.588 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.846 [2024-07-15 19:26:33.398844] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:43.846 [2024-07-15 19:26:33.398926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.846 [2024-07-15 19:26:33.537633] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.846 [2024-07-15 19:26:33.607652] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.846 [2024-07-15 19:26:33.607718] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:43.846 [2024-07-15 19:26:33.607732] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.846 [2024-07-15 19:26:33.607742] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.846 [2024-07-15 19:26:33.607751] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.846 [2024-07-15 19:26:33.607785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 [2024-07-15 19:26:33.741492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 [2024-07-15 19:26:33.757568] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 malloc0 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.105 
19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:44.105 { 00:10:44.105 "params": { 00:10:44.105 "name": "Nvme$subsystem", 00:10:44.105 "trtype": "$TEST_TRANSPORT", 00:10:44.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:44.105 "adrfam": "ipv4", 00:10:44.105 "trsvcid": "$NVMF_PORT", 00:10:44.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:44.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:44.105 "hdgst": ${hdgst:-false}, 00:10:44.105 "ddgst": ${ddgst:-false} 00:10:44.105 }, 00:10:44.105 "method": "bdev_nvme_attach_controller" 00:10:44.105 } 00:10:44.105 EOF 00:10:44.105 )") 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:44.105 19:26:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:44.105 "params": { 00:10:44.105 "name": "Nvme1", 00:10:44.105 "trtype": "tcp", 00:10:44.105 "traddr": "10.0.0.2", 00:10:44.105 "adrfam": "ipv4", 00:10:44.105 "trsvcid": "4420", 00:10:44.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:44.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:44.105 "hdgst": false, 00:10:44.105 "ddgst": false 00:10:44.105 }, 00:10:44.105 "method": "bdev_nvme_attach_controller" 00:10:44.105 }' 00:10:44.105 [2024-07-15 19:26:33.838024] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:44.105 [2024-07-15 19:26:33.838103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76091 ] 00:10:44.364 [2024-07-15 19:26:33.973904] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.364 [2024-07-15 19:26:34.035036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.621 Running I/O for 10 seconds... 
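Condensed, the trace above amounts to: start nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, create the TCP transport with zero-copy enabled, expose a 32 MB malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and drive it with a 10-second bdevperf verify job whose bdev_nvme_attach_controller config is generated on the fly. Below is a minimal stand-alone sketch of that flow, reusing the exact flags from the log. Assumptions: a built SPDK tree at /home/vagrant/spdk_repo/spdk, scripts/rpc.py called directly as a stand-in for the suite's rpc_cmd and waitforlisten helpers, a polling loop on rpc_get_methods instead of waitforlisten, and the standard "subsystems" JSON wrapper around the config fragment that the trace prints only in part.

#!/usr/bin/env bash
# Sketch only, not the test script itself. Assumes a built SPDK tree at
# /home/vagrant/spdk_repo/spdk and the veth/bridge topology set up earlier in the log
# (10.0.0.1 on nvmf_init_if, 10.0.0.2 inside the nvmf_tgt_ns_spdk namespace).
set -e
SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # stand-in for the suite's rpc_cmd wrapper

# Start the target inside the namespace with the same flags as the trace.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
until rpc rpc_get_methods > /dev/null 2>&1; do sleep 1; done   # crude wait; the suite uses waitforlisten

# TCP transport with zero-copy, a subsystem with data and discovery listeners,
# and a 32 MB / 4 KiB-block malloc bdev attached as namespace 1.
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# 10-second verify run. The inner object matches the config printed in the trace; the
# surrounding "subsystems" wrapper is the assumed standard SPDK JSON config layout
# (the suite feeds it through process substitution, hence --json /dev/fd/62 above).
cfg=$(mktemp)
cat > "$cfg" <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
             "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false}}]}]}
JSON
"$SPDK/build/examples/bdevperf" --json "$cfg" -t 10 -q 128 -w verify -o 8192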
00:10:54.664 00:10:54.664 Latency(us) 00:10:54.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.664 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:54.664 Verification LBA range: start 0x0 length 0x1000 00:10:54.664 Nvme1n1 : 10.01 5785.43 45.20 0.00 0.00 22053.18 1273.48 41943.04 00:10:54.664 =================================================================================================================== 00:10:54.664 Total : 5785.43 45.20 0.00 0.00 22053.18 1273.48 41943.04 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76214 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:54.664 { 00:10:54.664 "params": { 00:10:54.664 "name": "Nvme$subsystem", 00:10:54.664 "trtype": "$TEST_TRANSPORT", 00:10:54.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.664 "adrfam": "ipv4", 00:10:54.664 "trsvcid": "$NVMF_PORT", 00:10:54.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.664 "hdgst": ${hdgst:-false}, 00:10:54.664 "ddgst": ${ddgst:-false} 00:10:54.664 }, 00:10:54.664 "method": "bdev_nvme_attach_controller" 00:10:54.664 } 00:10:54.664 EOF 00:10:54.664 )") 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:54.664 [2024-07-15 19:26:44.364242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.364440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
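From here the trace interleaves the 5-second randrw bdevperf job launched above (-t 5 -q 128 -w randrw -M 50 -o 8192) with a long run of JSON-RPC errors. Each of those entries is the target rejecting another nvmf_subsystem_add_ns call for NSID 1 (params: bdev_name malloc0, nsid 1) because that namespace already exists: the target logs "Requested NSID 1 already in use" and the caller sees Code=-32602 Msg=Invalid parameters (judging by the map[...] and %!s(bool=false) rendering, these particular client-side messages come from a Go JSON-RPC client). As far as the trace shows, the I/O job itself proceeds ("Running I/O for 5 seconds..." appears further down), so the errors are the expected outcome of re-issuing the call rather than a failure of the run. A minimal sketch of exercising the same rejection from a shell script, under the same assumptions as the sketch above:

# Sketch only: re-issue the namespace add with the same arguments as in the trace and
# confirm the duplicate is rejected; "rpc" is the scripts/rpc.py wrapper from the
# earlier sketch (the messages above come from the suite's own RPC client).
if rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    echo "unexpected: duplicate NSID 1 was accepted" >&2
    exit 1
else
    # The client exits non-zero on the JSON-RPC error (Code=-32602, Invalid parameters);
    # on the target side this is the "Requested NSID 1 already in use" rejection.
    echo "duplicate nvmf_subsystem_add_ns rejected as expected"
fi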
00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:54.664 19:26:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:54.664 "params": { 00:10:54.664 "name": "Nvme1", 00:10:54.664 "trtype": "tcp", 00:10:54.664 "traddr": "10.0.0.2", 00:10:54.664 "adrfam": "ipv4", 00:10:54.664 "trsvcid": "4420", 00:10:54.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.664 "hdgst": false, 00:10:54.664 "ddgst": false 00:10:54.664 }, 00:10:54.664 "method": "bdev_nvme_attach_controller" 00:10:54.664 }' 00:10:54.664 [2024-07-15 19:26:44.376215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.376261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.388212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.388240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.400211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.400236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.412228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.412253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.424243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.424269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 [2024-07-15 19:26:44.426453] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:10:54.664 [2024-07-15 19:26:44.426558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76214 ] 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.436245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.436269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.448251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.448277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.664 [2024-07-15 19:26:44.460269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.664 [2024-07-15 19:26:44.460299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.664 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.472263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.472286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.484259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.484284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.496260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.496285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.508262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.508286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.516258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.516279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.528279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.528299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.540294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.540316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.552289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.552313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.564307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.564336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:54.923 [2024-07-15 19:26:44.573814] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.923 [2024-07-15 19:26:44.576306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.576333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.588336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.588366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.600320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.600347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.612321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.612345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.923 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.923 [2024-07-15 19:26:44.624345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.923 [2024-07-15 19:26:44.624384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.636325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.636348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.648362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.648414] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.660337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.660390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.664602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.924 [2024-07-15 19:26:44.672340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.672393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.684385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.684440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.696388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.696433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.708354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.708396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.924 [2024-07-15 19:26:44.720404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.924 [2024-07-15 19:26:44.720450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.924 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.183 [2024-07-15 19:26:44.732355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.183 [2024-07-15 19:26:44.732403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.183 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.183 [2024-07-15 19:26:44.744450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.183 [2024-07-15 19:26:44.744482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.183 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.183 [2024-07-15 19:26:44.756442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.183 [2024-07-15 19:26:44.756483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.183 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.183 [2024-07-15 19:26:44.768458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.183 [2024-07-15 19:26:44.768489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.183 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.183 [2024-07-15 19:26:44.780493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.183 [2024-07-15 19:26:44.780522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.183 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.183 [2024-07-15 19:26:44.792466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.183 [2024-07-15 19:26:44.792492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.804480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.804511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 Running I/O for 5 seconds... 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.822147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.822185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.839058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.839095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.855409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.855449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.872134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.872173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.888165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.888203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.899157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.899194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.914616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.914652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.931176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.931214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.947013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.947050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.964350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.964401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.184 [2024-07-15 19:26:44.980087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.184 [2024-07-15 19:26:44.980122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.184 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.443 [2024-07-15 19:26:44.993584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.443 [2024-07-15 19:26:44.993619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.443 2024/07/15 19:26:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.443 [2024-07-15 19:26:45.011101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:10:55.443 [2024-07-15 19:26:45.011138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.443 2024/07/15 19:26:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use"; nvmf_rpc.c:1553:nvmf_rpc_ns_paused: "Unable to add namespace"; JSON-RPC nvmf_subsystem_add_ns rejected with Code=-32602 Msg=Invalid parameters) repeats with only the timestamps advancing, 2024-07-15 19:26:45.026 through 19:26:46.897 (elapsed 00:10:55.443 - 00:10:57.261) ...]
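Each repetition above is one nvmf_subsystem_add_ns JSON-RPC call that the target rejects because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1; the Go-based RPC client then logs the rejected request and the -32602 (Invalid parameters) error. The snippet below is a minimal illustrative sketch, not part of the captured run, of replaying that same request against a running SPDK target; the Unix-socket path /var/tmp/spdk.sock and the request id are assumptions, not values taken from this log.

#!/usr/bin/env python3
# Minimal reproduction sketch: send the nvmf_subsystem_add_ns request seen in
# the log to a running SPDK target over its JSON-RPC Unix socket.
# The socket path and request id below are assumptions; adjust for your setup.
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumed SPDK RPC listen address

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(RPC_SOCK)
    sock.sendall(json.dumps(request).encode())
    buf = b""
    response = None
    while response is None:
        chunk = sock.recv(4096)
        if not chunk:  # server closed the connection before replying
            raise RuntimeError("no JSON-RPC response received")
        buf += chunk
        try:
            response = json.loads(buf)  # keep reading until the JSON is complete
        except json.JSONDecodeError:
            continue

# If NSID 1 is already attached to cnode1, the target answers with an error
# object of the form {"error": {"code": -32602, "message": "Invalid parameters"}},
# which is the Code=-32602 Msg=Invalid parameters line repeated in the log.
print(json.dumps(response, indent=2))

For interactive use, the same call can be issued through SPDK's in-tree scripts/rpc.py wrapper, e.g. scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 (exact argument and flag spelling depends on the SPDK revision under test). The tail of the repeated error block resumes below.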
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.261 [2024-07-15 19:26:46.931187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.261 [2024-07-15 19:26:46.931236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.261 2024/07/15 19:26:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.261 [2024-07-15 19:26:46.947595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.261 [2024-07-15 19:26:46.947662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.261 2024/07/15 19:26:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.261 [2024-07-15 19:26:46.963523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.261 [2024-07-15 19:26:46.963572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.261 2024/07/15 19:26:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.261 [2024-07-15 19:26:46.982533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.261 [2024-07-15 19:26:46.982569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.262 2024/07/15 19:26:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.262 [2024-07-15 19:26:46.997071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.262 [2024-07-15 19:26:46.997120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.262 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.262 [2024-07-15 19:26:47.013761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.262 [2024-07-15 19:26:47.013830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.262 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.262 [2024-07-15 19:26:47.030125] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.262 [2024-07-15 19:26:47.030176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.262 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.262 [2024-07-15 19:26:47.046240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.262 [2024-07-15 19:26:47.046307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.262 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.262 [2024-07-15 19:26:47.063944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.262 [2024-07-15 19:26:47.064010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.079803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.079840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.095963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.095999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.112937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.112974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.129263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.129305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.144940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.144974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.157307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.157344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.172648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.172699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.189285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.189335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.204975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.205025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.221165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.221217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.521 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.521 [2024-07-15 19:26:47.232027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:57.521 [2024-07-15 19:26:47.232077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.522 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.522 [2024-07-15 19:26:47.247180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.522 [2024-07-15 19:26:47.247215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.522 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.522 [2024-07-15 19:26:47.264717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.522 [2024-07-15 19:26:47.264754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.522 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.522 [2024-07-15 19:26:47.280852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.522 [2024-07-15 19:26:47.280902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.522 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.522 [2024-07-15 19:26:47.297995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.522 [2024-07-15 19:26:47.298046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.522 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.522 [2024-07-15 19:26:47.313779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.522 [2024-07-15 19:26:47.313828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.522 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.330700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.330736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.346642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.346678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.357140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.357189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.371985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.372034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.381995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.382060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.396846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.396898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.412701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.412750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.428966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:57.781 [2024-07-15 19:26:47.429018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.444762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.444811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.455812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.455861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.470486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.470521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.486501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.486545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.503248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.503285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.522621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.522671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.537485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.537551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.548245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.548296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.781 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.781 [2024-07-15 19:26:47.563146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.781 [2024-07-15 19:26:47.563198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.782 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.782 [2024-07-15 19:26:47.580533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.782 [2024-07-15 19:26:47.580569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.596198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.596248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.606684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.606720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.621658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.621695] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.638734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.638802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.654075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.654125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.670029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.670065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.687512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.687547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.702933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.702999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.719711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.719764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.735455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.735504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.752716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.752754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.768806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.768856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.785787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.785837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.802319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.802379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.817443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.817480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.041 [2024-07-15 19:26:47.833894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.041 [2024-07-15 19:26:47.833928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:58.041 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.300 [2024-07-15 19:26:47.849861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.300 [2024-07-15 19:26:47.849895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.300 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.300 [2024-07-15 19:26:47.866314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.300 [2024-07-15 19:26:47.866385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.300 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.300 [2024-07-15 19:26:47.884082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.300 [2024-07-15 19:26:47.884115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.300 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.300 [2024-07-15 19:26:47.898813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.300 [2024-07-15 19:26:47.898847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.300 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.300 [2024-07-15 19:26:47.915985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.300 [2024-07-15 19:26:47.916018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:47.926553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:47.926588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:58.301 [2024-07-15 19:26:47.941179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:47.941212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:47.958175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:47.958208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:47.973420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:47.973455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:47.984297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:47.984330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:47.998916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:47.998948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.009278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.009327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.023945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.023981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.034462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.034499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.045560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.045610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.056592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.056659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.071822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.071860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.301 [2024-07-15 19:26:48.087917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.301 [2024-07-15 19:26:48.087953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.301 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.104990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.105057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.120772] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.120823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.138031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.138067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.153420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.153457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.170401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.170437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.186073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.186124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.196868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.196917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.211980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.212028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.228456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.228504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.245514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.245564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.262466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.262499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.284765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.284851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.301123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.301175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.317110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.317160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.334217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.334270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.560 [2024-07-15 19:26:48.350093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.560 [2024-07-15 19:26:48.350128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.560 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.367495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.367532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.383232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.383284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.399561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.399596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.416513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.416562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.432285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.432334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.442771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.442838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.458017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.458066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.468815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.820 [2024-07-15 19:26:48.468851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.820 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.820 [2024-07-15 19:26:48.484017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.484066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.501147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.501198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.517566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.517614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.534220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:58.821 [2024-07-15 19:26:48.534273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.550803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.550855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.567367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.567447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.583318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.583367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.593494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.593544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.821 [2024-07-15 19:26:48.609184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.821 [2024-07-15 19:26:48.609234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.821 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.627947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.627984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.643430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.643489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.658880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.658929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.674638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.674709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.691517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.691553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.706938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.706992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.723852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.723885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.739391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.080 [2024-07-15 19:26:48.739426] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.080 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.080 [2024-07-15 19:26:48.755186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.755222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.765974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.766010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.780366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.780414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.795846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.795882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.812991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.813028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.828885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.828923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.845844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.845894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.861948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.861998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.081 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.081 [2024-07-15 19:26:48.880056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.081 [2024-07-15 19:26:48.880107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.895570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.895604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.912968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.913020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.929704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.929741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.946048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.946098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.963532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.963613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.978979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.979028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:48.994931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:48.994985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:49.012286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:49.012336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:49.027903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:49.027954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:49.046466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:49.046502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:59.340 [2024-07-15 19:26:49.062070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:49.062107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:49.079224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:49.079277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.340 [2024-07-15 19:26:49.096721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.340 [2024-07-15 19:26:49.096759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.340 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.341 [2024-07-15 19:26:49.112257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.341 [2024-07-15 19:26:49.112294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.341 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.341 [2024-07-15 19:26:49.129314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.341 [2024-07-15 19:26:49.129353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.341 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.145541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.145576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.162215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.162253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.178163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.178200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.195131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.195169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.210560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.210597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.227192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.227243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.242842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.242888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.600 [2024-07-15 19:26:49.259829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.600 [2024-07-15 19:26:49.259867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.600 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.275932] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.275985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.293176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.293210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.309004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.309041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.326641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.326679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.342861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.342912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.359474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.359572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.376957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.377031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.601 [2024-07-15 19:26:49.392337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.601 [2024-07-15 19:26:49.392417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.601 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.403449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.403517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.418081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.418137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.429937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.430012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.448450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.448513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.464053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.464110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.481626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.481683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.498199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.498235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.515118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.515156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.531310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.531388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.547895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.547962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.564494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.564544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.581176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.581214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.597092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.597143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.860 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.860 [2024-07-15 19:26:49.608242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.860 [2024-07-15 19:26:49.608306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.861 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.861 [2024-07-15 19:26:49.623306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.861 [2024-07-15 19:26:49.623341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.861 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.861 [2024-07-15 19:26:49.640026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.861 [2024-07-15 19:26:49.640063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.861 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.861 [2024-07-15 19:26:49.656919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.861 [2024-07-15 19:26:49.656956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.861 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.672810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.120 [2024-07-15 19:26:49.672878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.120 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.683211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:00.120 [2024-07-15 19:26:49.683261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.120 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.698183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.120 [2024-07-15 19:26:49.698250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.120 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.713683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.120 [2024-07-15 19:26:49.713735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.120 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.724597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.120 [2024-07-15 19:26:49.724648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.120 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.739262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.120 [2024-07-15 19:26:49.739299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.120 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.120 [2024-07-15 19:26:49.750552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.120 [2024-07-15 19:26:49.750586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.765314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.765390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.782772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.782824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.798136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.798188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.807023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.807075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.819124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.819196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 00:11:00.121 Latency(us) 00:11:00.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.121 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:00.121 Nvme1n1 : 5.01 11420.48 89.22 0.00 0.00 11193.04 3038.49 17754.30 00:11:00.121 =================================================================================================================== 00:11:00.121 Total : 11420.48 89.22 0.00 0.00 11193.04 3038.49 17754.30 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.831133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.831165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.843148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.843196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.855131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.855175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.867155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.867196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.879169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.879222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.891169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.891235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.903175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.903214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.121 [2024-07-15 19:26:49.915149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.121 [2024-07-15 19:26:49.915185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.121 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:11:00.381 [2024-07-15 19:26:49.927151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.381 [2024-07-15 19:26:49.927192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.381 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.381 [2024-07-15 19:26:49.939177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.381 [2024-07-15 19:26:49.939213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.381 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.381 [2024-07-15 19:26:49.951184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.381 [2024-07-15 19:26:49.951247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.381 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.381 [2024-07-15 19:26:49.963199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.381 [2024-07-15 19:26:49.963239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.381 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.381 [2024-07-15 19:26:49.975198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.381 [2024-07-15 19:26:49.975250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.381 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.381 [2024-07-15 19:26:49.987171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.381 [2024-07-15 19:26:49.987200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.381 2024/07/15 19:26:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.381 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76214) - No such process 00:11:00.381 19:26:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76214 00:11:00.381 19:26:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.381 19:26:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.381 19:26:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:00.381 delay0 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.381 19:26:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:00.640 [2024-07-15 19:26:50.187959] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:07.221 Initializing NVMe Controllers 00:11:07.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:07.221 Initialization complete. Launching workers. 
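The burst of Code=-32602 failures above appears to be the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is already attached to nqn.2016-06.io.spdk:cnode1, so the target rejects every call with "Requested NSID 1 already in use". Once that loop is stopped, the script swaps the namespace for a delay bdev and drives it with the abort example, as the results below show. A minimal sketch of the same sequence, assuming a running SPDK target and using scripts/rpc.py directly in place of the rpc_cmd test helper (which wraps it):
  # Re-adding an NSID that is already in use fails with JSON-RPC -32602 (Invalid parameters)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Replace the namespace with a delay bdev (latencies given in microseconds)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Exercise abort handling against the slowed-down namespace over TCP
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'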
00:11:07.221 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 00:11:07.221 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 33 00:11:07.221 success 185, unsuccess 206, failed 0 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.221 rmmod nvme_tcp 00:11:07.221 rmmod nvme_fabrics 00:11:07.221 rmmod nvme_keyring 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76059 ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76059 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76059 ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76059 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76059 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76059' 00:11:07.221 killing process with pid 76059 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76059 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76059 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:07.221 00:11:07.221 real 0m23.670s 00:11:07.221 user 0m38.880s 00:11:07.221 sys 0m6.378s 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.221 19:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 ************************************ 00:11:07.221 END TEST nvmf_zcopy 00:11:07.221 ************************************ 00:11:07.221 19:26:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.221 19:26:56 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:07.221 19:26:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.221 19:26:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.221 19:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 ************************************ 00:11:07.221 START TEST nvmf_nmic 00:11:07.221 ************************************ 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:07.221 * Looking for test storage... 00:11:07.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.221 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:07.222 Cannot find device "nvmf_tgt_br" 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.222 Cannot find device "nvmf_tgt_br2" 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:07.222 Cannot find device "nvmf_tgt_br" 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:07.222 Cannot find device "nvmf_tgt_br2" 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:07.222 19:26:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.222 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:07.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:07.481 00:11:07.481 --- 10.0.0.2 ping statistics --- 00:11:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.481 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:07.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:07.481 00:11:07.481 --- 10.0.0.3 ping statistics --- 00:11:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.481 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:07.481 00:11:07.481 --- 10.0.0.1 ping statistics --- 00:11:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.481 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76533 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76533 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76533 ']' 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.481 19:26:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.481 [2024-07-15 19:26:57.145793] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:11:07.481 [2024-07-15 19:26:57.145924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.741 [2024-07-15 19:26:57.286224] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.741 [2024-07-15 19:26:57.361999] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.741 [2024-07-15 19:26:57.362068] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.741 [2024-07-15 19:26:57.362093] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.741 [2024-07-15 19:26:57.362103] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.741 [2024-07-15 19:26:57.362111] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.741 [2024-07-15 19:26:57.362934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.741 [2024-07-15 19:26:57.363016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.741 [2024-07-15 19:26:57.363117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.741 [2024-07-15 19:26:57.363110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 [2024-07-15 19:26:58.204668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 Malloc0 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
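[Editor's note] The nvmf_veth_init block above wires a veth/bridge topology (initiator at 10.0.0.1 on nvmf_init_if, target interfaces at 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge), and the rpc_cmd lines then configure the target over /var/tmp/spdk.sock. A minimal sketch of that target-side setup follows, using only commands that appear in the log; the inline comments are the editor's reading, not test output.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # rpc_cmd in the log resolves to this script

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport with the test's options
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address 10.0.0.2 is the in-namespace target IP assigned a few lines earlier, which is why the host-side nvme connect later in the log reaches it through the bridge.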
00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 [2024-07-15 19:26:58.266173] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 test case1: single bdev can't be used in multiple subsystems 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.696 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.696 [2024-07-15 19:26:58.293980] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:08.696 [2024-07-15 19:26:58.294023] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:08.696 [2024-07-15 19:26:58.294034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.696 2024/07/15 19:26:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:08.697 request: 00:11:08.697 { 00:11:08.697 "method": "nvmf_subsystem_add_ns", 00:11:08.697 "params": { 00:11:08.697 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:08.697 "namespace": { 00:11:08.697 "bdev_name": "Malloc0", 00:11:08.697 "no_auto_visible": false 00:11:08.697 } 00:11:08.697 } 00:11:08.697 } 00:11:08.697 Got JSON-RPC error response 00:11:08.697 GoRPCClient: error on JSON-RPC call 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:08.697 Adding namespace failed - expected result. 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:08.697 test case2: host connect to nvmf target in multiple paths 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.697 [2024-07-15 19:26:58.306144] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.697 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:08.954 19:26:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.954 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:08.954 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.954 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:08.954 19:26:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:10.851 19:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:10.851 19:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:10.851 19:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.108 19:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:11.108 19:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.108 19:27:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:11.108 19:27:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:11.108 [global] 00:11:11.108 thread=1 00:11:11.108 invalidate=1 00:11:11.108 rw=write 00:11:11.108 time_based=1 00:11:11.108 runtime=1 00:11:11.108 ioengine=libaio 00:11:11.108 direct=1 00:11:11.108 bs=4096 00:11:11.108 iodepth=1 00:11:11.108 norandommap=0 00:11:11.108 numjobs=1 00:11:11.108 00:11:11.108 verify_dump=1 00:11:11.108 verify_backlog=512 00:11:11.108 verify_state_save=0 00:11:11.108 do_verify=1 00:11:11.108 verify=crc32c-intel 00:11:11.108 [job0] 00:11:11.108 filename=/dev/nvme0n1 00:11:11.108 Could not set queue depth (nvme0n1) 00:11:11.108 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.108 fio-3.35 00:11:11.108 
Starting 1 thread 00:11:12.487 00:11:12.487 job0: (groupid=0, jobs=1): err= 0: pid=76644: Mon Jul 15 19:27:01 2024 00:11:12.487 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:12.487 slat (nsec): min=15312, max=64009, avg=17948.75, stdev=3585.23 00:11:12.487 clat (usec): min=132, max=231, avg=151.20, stdev= 9.60 00:11:12.487 lat (usec): min=150, max=252, avg=169.14, stdev=10.88 00:11:12.487 clat percentiles (usec): 00:11:12.487 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:11:12.487 | 30.00th=[ 147], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:11:12.487 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:11:12.487 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 217], 99.95th=[ 223], 00:11:12.487 | 99.99th=[ 231] 00:11:12.487 write: IOPS=3549, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1001msec); 0 zone resets 00:11:12.487 slat (nsec): min=21545, max=70947, avg=25274.75, stdev=4430.90 00:11:12.487 clat (usec): min=92, max=313, avg=106.21, stdev= 9.59 00:11:12.487 lat (usec): min=115, max=339, avg=131.49, stdev=11.29 00:11:12.487 clat percentiles (usec): 00:11:12.487 | 1.00th=[ 96], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 100], 00:11:12.487 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 106], 00:11:12.487 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 123], 00:11:12.487 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 210], 99.95th=[ 249], 00:11:12.487 | 99.99th=[ 314] 00:11:12.487 bw ( KiB/s): min=13920, max=13920, per=98.04%, avg=13920.00, stdev= 0.00, samples=1 00:11:12.487 iops : min= 3480, max= 3480, avg=3480.00, stdev= 0.00, samples=1 00:11:12.487 lat (usec) : 100=10.64%, 250=89.34%, 500=0.02% 00:11:12.487 cpu : usr=3.00%, sys=10.40%, ctx=6625, majf=0, minf=2 00:11:12.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.488 issued rwts: total=3072,3553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.488 00:11:12.488 Run status group 0 (all jobs): 00:11:12.488 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:12.488 WRITE: bw=13.9MiB/s (14.5MB/s), 13.9MiB/s-13.9MiB/s (14.5MB/s-14.5MB/s), io=13.9MiB (14.6MB), run=1001-1001msec 00:11:12.488 00:11:12.488 Disk stats (read/write): 00:11:12.488 nvme0n1: ios=2897/3072, merge=0/0, ticks=468/366, in_queue=834, util=91.28% 00:11:12.488 19:27:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:12.488 19:27:02 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.488 rmmod nvme_tcp 00:11:12.488 rmmod nvme_fabrics 00:11:12.488 rmmod nvme_keyring 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76533 ']' 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76533 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76533 ']' 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76533 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76533 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.488 killing process with pid 76533 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76533' 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76533 00:11:12.488 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76533 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:12.746 00:11:12.746 real 0m5.745s 00:11:12.746 user 0m19.445s 00:11:12.746 sys 0m1.362s 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.746 19:27:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.746 ************************************ 00:11:12.746 END TEST nvmf_nmic 00:11:12.746 ************************************ 00:11:12.746 19:27:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:12.746 
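[Editor's note] Before the next test starts, here is a condensed sketch of the two nmic test cases exercised above, reusing the rpc alias from the previous note and the host NQN/ID generated earlier in the log; the comments and echo strings are the editor's, not test output.

    # test case 1: a bdev already claimed by cnode1 cannot be added to a second subsystem
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected: add_ns succeeded' \
        || echo 'add_ns failed as expected (bdev Malloc0 already claimed)'

    # test case 2: one subsystem, two listeners, two host connections
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # ... run fio against /dev/nvme0n1, then:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # the log reports "disconnected 2 controller(s)"

Both controllers expose the same SPDKISFASTANDAWESOME serial, which is what the waitforserial and waitforserial_disconnect helpers grep for in lsblk output.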
19:27:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:12.746 19:27:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.746 19:27:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.746 19:27:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.746 ************************************ 00:11:12.746 START TEST nvmf_fio_target 00:11:12.746 ************************************ 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:12.746 * Looking for test storage... 00:11:12.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.746 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:12.747 Cannot find device "nvmf_tgt_br" 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:12.747 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.005 Cannot find device "nvmf_tgt_br2" 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:13.005 Cannot find device "nvmf_tgt_br" 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:13.005 Cannot find device "nvmf_tgt_br2" 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:13.005 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:13.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:11:13.263 00:11:13.263 --- 10.0.0.2 ping statistics --- 00:11:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.263 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:13.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:13.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:13.263 00:11:13.263 --- 10.0.0.3 ping statistics --- 00:11:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.263 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:13.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:13.263 00:11:13.263 --- 10.0.0.1 ping statistics --- 00:11:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.263 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76822 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76822 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76822 ']' 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.263 19:27:02 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.263 19:27:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.263 [2024-07-15 19:27:02.890888] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:11:13.264 [2024-07-15 19:27:02.890977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.264 [2024-07-15 19:27:03.026639] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.521 [2024-07-15 19:27:03.104623] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.521 [2024-07-15 19:27:03.104684] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.521 [2024-07-15 19:27:03.104699] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.521 [2024-07-15 19:27:03.104710] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.521 [2024-07-15 19:27:03.104719] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.521 [2024-07-15 19:27:03.105463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.521 [2024-07-15 19:27:03.105549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.521 [2024-07-15 19:27:03.105636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.521 [2024-07-15 19:27:03.105644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.086 19:27:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.086 19:27:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:14.086 19:27:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.086 19:27:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:14.086 19:27:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.344 19:27:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.344 19:27:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:14.603 [2024-07-15 19:27:04.157609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.603 19:27:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.861 19:27:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:14.861 19:27:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.120 19:27:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
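[Editor's note] The following lines build the rest of the bdev layout for the fio test. As a sketch of where it ends up, cnode1 exports four namespaces: the two plain malloc bdevs created above plus a raid0 and a concat bdev assembled from five more malloc bdevs. The commands mirror the rpc.py calls in the log below, grouped here for readability; the comments are the editor's.

    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'           # striped across two malloc bdevs
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'   # concatenation of three
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the host then connects once and waits for 4 namespaces (/dev/nvme0n1 .. /dev/nvme0n4)

This is why the fio wrapper below defines four jobs, one per namespace device.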
00:11:15.120 19:27:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.378 19:27:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:15.378 19:27:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.637 19:27:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:15.637 19:27:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:15.895 19:27:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.153 19:27:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:16.153 19:27:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.411 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:16.411 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.669 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:16.669 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:16.927 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.186 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:17.186 19:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.444 19:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:17.444 19:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.702 19:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.960 [2024-07-15 19:27:07.507503] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.960 19:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:18.219 19:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:18.477 19:27:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:21.019 19:27:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:21.019 [global] 00:11:21.019 thread=1 00:11:21.019 invalidate=1 00:11:21.019 rw=write 00:11:21.019 time_based=1 00:11:21.019 runtime=1 00:11:21.019 ioengine=libaio 00:11:21.019 direct=1 00:11:21.019 bs=4096 00:11:21.019 iodepth=1 00:11:21.019 norandommap=0 00:11:21.019 numjobs=1 00:11:21.019 00:11:21.019 verify_dump=1 00:11:21.019 verify_backlog=512 00:11:21.019 verify_state_save=0 00:11:21.019 do_verify=1 00:11:21.019 verify=crc32c-intel 00:11:21.019 [job0] 00:11:21.019 filename=/dev/nvme0n1 00:11:21.019 [job1] 00:11:21.019 filename=/dev/nvme0n2 00:11:21.019 [job2] 00:11:21.019 filename=/dev/nvme0n3 00:11:21.019 [job3] 00:11:21.019 filename=/dev/nvme0n4 00:11:21.019 Could not set queue depth (nvme0n1) 00:11:21.019 Could not set queue depth (nvme0n2) 00:11:21.019 Could not set queue depth (nvme0n3) 00:11:21.019 Could not set queue depth (nvme0n4) 00:11:21.019 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.019 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.019 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.019 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.019 fio-3.35 00:11:21.019 Starting 4 threads 00:11:21.974 00:11:21.974 job0: (groupid=0, jobs=1): err= 0: pid=77114: Mon Jul 15 19:27:11 2024 00:11:21.974 read: IOPS=2349, BW=9399KiB/s (9624kB/s)(9408KiB/1001msec) 00:11:21.974 slat (nsec): min=12532, max=52723, avg=16363.48, stdev=3658.39 00:11:21.974 clat (usec): min=125, max=1771, avg=203.97, stdev=63.64 00:11:21.974 lat (usec): min=156, max=1789, avg=220.33, stdev=62.31 00:11:21.974 clat percentiles (usec): 00:11:21.974 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:21.974 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 182], 00:11:21.974 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:11:21.974 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 457], 99.95th=[ 461], 00:11:21.974 | 99.99th=[ 1778] 00:11:21.974 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:21.974 slat (usec): min=16, max=138, avg=25.73, stdev= 5.01 00:11:21.974 clat (usec): min=104, max=8057, avg=158.38, 
stdev=176.31 00:11:21.974 lat (usec): min=126, max=8082, avg=184.11, stdev=176.21 00:11:21.974 clat percentiles (usec): 00:11:21.974 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123], 00:11:21.974 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:11:21.974 | 70.00th=[ 147], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 229], 00:11:21.974 | 99.00th=[ 260], 99.50th=[ 371], 99.90th=[ 1991], 99.95th=[ 2278], 00:11:21.974 | 99.99th=[ 8029] 00:11:21.974 bw ( KiB/s): min=12288, max=12288, per=27.82%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.974 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.974 lat (usec) : 250=82.72%, 500=17.04%, 750=0.10%, 1000=0.02% 00:11:21.974 lat (msec) : 2=0.08%, 4=0.02%, 10=0.02% 00:11:21.974 cpu : usr=2.70%, sys=7.50%, ctx=4916, majf=0, minf=9 00:11:21.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.974 issued rwts: total=2352,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.974 job1: (groupid=0, jobs=1): err= 0: pid=77115: Mon Jul 15 19:27:11 2024 00:11:21.974 read: IOPS=2348, BW=9395KiB/s (9620kB/s)(9404KiB/1001msec) 00:11:21.974 slat (nsec): min=10050, max=48763, avg=17758.69, stdev=3284.52 00:11:21.974 clat (usec): min=140, max=2390, avg=206.50, stdev=78.49 00:11:21.974 lat (usec): min=155, max=2422, avg=224.26, stdev=77.97 00:11:21.974 clat percentiles (usec): 00:11:21.974 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:21.974 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 184], 00:11:21.974 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 285], 00:11:21.974 | 99.00th=[ 314], 99.50th=[ 363], 99.90th=[ 898], 99.95th=[ 1598], 00:11:21.974 | 99.99th=[ 2376] 00:11:21.974 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:21.974 slat (usec): min=15, max=125, avg=26.30, stdev= 5.93 00:11:21.974 clat (usec): min=103, max=725, avg=154.56, stdev=47.56 00:11:21.974 lat (usec): min=129, max=745, avg=180.86, stdev=46.92 00:11:21.974 clat percentiles (usec): 00:11:21.974 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:11:21.974 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 139], 00:11:21.975 | 70.00th=[ 153], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 229], 00:11:21.975 | 99.00th=[ 255], 99.50th=[ 293], 99.90th=[ 652], 99.95th=[ 693], 00:11:21.975 | 99.99th=[ 725] 00:11:21.975 bw ( KiB/s): min=12288, max=12288, per=27.82%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.975 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.975 lat (usec) : 250=82.75%, 500=16.98%, 750=0.18%, 1000=0.04% 00:11:21.975 lat (msec) : 2=0.02%, 4=0.02% 00:11:21.975 cpu : usr=1.50%, sys=8.80%, ctx=4911, majf=0, minf=7 00:11:21.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.975 issued rwts: total=2351,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.975 job2: (groupid=0, jobs=1): err= 0: pid=77116: Mon Jul 15 19:27:11 2024 00:11:21.975 read: IOPS=2576, BW=10.1MiB/s 
(10.6MB/s)(10.1MiB/1000msec) 00:11:21.975 slat (nsec): min=14302, max=48399, avg=17697.22, stdev=4346.26 00:11:21.975 clat (usec): min=150, max=1880, avg=175.46, stdev=49.94 00:11:21.975 lat (usec): min=165, max=1897, avg=193.15, stdev=50.26 00:11:21.975 clat percentiles (usec): 00:11:21.975 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:21.975 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:11:21.975 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:11:21.975 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 685], 99.95th=[ 1827], 00:11:21.975 | 99.99th=[ 1876] 00:11:21.975 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:11:21.975 slat (nsec): min=20438, max=92019, avg=24491.29, stdev=4549.47 00:11:21.975 clat (usec): min=113, max=250, avg=135.34, stdev=10.28 00:11:21.975 lat (usec): min=135, max=342, avg=159.83, stdev=12.08 00:11:21.975 clat percentiles (usec): 00:11:21.975 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 127], 00:11:21.975 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:21.975 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:11:21.975 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 210], 00:11:21.975 | 99.99th=[ 251] 00:11:21.975 bw ( KiB/s): min=12288, max=12288, per=27.82%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.975 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.975 lat (usec) : 250=99.86%, 500=0.09%, 750=0.02% 00:11:21.975 lat (msec) : 2=0.04% 00:11:21.975 cpu : usr=2.80%, sys=8.50%, ctx=5650, majf=0, minf=8 00:11:21.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.975 issued rwts: total=2576,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.975 job3: (groupid=0, jobs=1): err= 0: pid=77117: Mon Jul 15 19:27:11 2024 00:11:21.975 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:21.975 slat (nsec): min=14021, max=62822, avg=17028.77, stdev=3155.43 00:11:21.975 clat (usec): min=149, max=859, avg=181.27, stdev=21.11 00:11:21.975 lat (usec): min=166, max=896, avg=198.30, stdev=21.69 00:11:21.975 clat percentiles (usec): 00:11:21.975 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:11:21.975 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:11:21.975 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 204], 00:11:21.975 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 465], 99.95th=[ 553], 00:11:21.975 | 99.99th=[ 857] 00:11:21.975 write: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:11:21.975 slat (usec): min=20, max=125, avg=27.22, stdev= 9.48 00:11:21.975 clat (usec): min=113, max=256, avg=141.08, stdev=12.63 00:11:21.975 lat (usec): min=138, max=314, avg=168.30, stdev=18.14 00:11:21.975 clat percentiles (usec): 00:11:21.975 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:11:21.975 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:11:21.975 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:11:21.975 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 231], 99.95th=[ 255], 00:11:21.975 | 99.99th=[ 258] 00:11:21.975 bw ( KiB/s): min=12288, max=12288, per=27.82%, avg=12288.00, stdev= 0.00, 
samples=1 00:11:21.975 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.975 lat (usec) : 250=99.87%, 500=0.09%, 750=0.02%, 1000=0.02% 00:11:21.975 cpu : usr=2.10%, sys=9.30%, ctx=5422, majf=0, minf=11 00:11:21.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.975 issued rwts: total=2560,2861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.975 00:11:21.975 Run status group 0 (all jobs): 00:11:21.975 READ: bw=38.4MiB/s (40.3MB/s), 9395KiB/s-10.1MiB/s (9620kB/s-10.6MB/s), io=38.4MiB (40.3MB), run=1000-1001msec 00:11:21.975 WRITE: bw=43.1MiB/s (45.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=43.2MiB (45.3MB), run=1000-1001msec 00:11:21.975 00:11:21.975 Disk stats (read/write): 00:11:21.975 nvme0n1: ios=2098/2287, merge=0/0, ticks=428/355, in_queue=783, util=86.27% 00:11:21.975 nvme0n2: ios=2089/2286, merge=0/0, ticks=432/366, in_queue=798, util=86.99% 00:11:21.975 nvme0n3: ios=2202/2560, merge=0/0, ticks=400/378, in_queue=778, util=88.88% 00:11:21.975 nvme0n4: ios=2048/2546, merge=0/0, ticks=385/391, in_queue=776, util=89.64% 00:11:21.975 19:27:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:21.975 [global] 00:11:21.975 thread=1 00:11:21.975 invalidate=1 00:11:21.975 rw=randwrite 00:11:21.975 time_based=1 00:11:21.975 runtime=1 00:11:21.975 ioengine=libaio 00:11:21.975 direct=1 00:11:21.975 bs=4096 00:11:21.975 iodepth=1 00:11:21.975 norandommap=0 00:11:21.975 numjobs=1 00:11:21.975 00:11:21.975 verify_dump=1 00:11:21.975 verify_backlog=512 00:11:21.975 verify_state_save=0 00:11:21.975 do_verify=1 00:11:21.975 verify=crc32c-intel 00:11:21.975 [job0] 00:11:21.975 filename=/dev/nvme0n1 00:11:21.975 [job1] 00:11:21.975 filename=/dev/nvme0n2 00:11:21.975 [job2] 00:11:21.975 filename=/dev/nvme0n3 00:11:21.975 [job3] 00:11:21.975 filename=/dev/nvme0n4 00:11:21.975 Could not set queue depth (nvme0n1) 00:11:21.975 Could not set queue depth (nvme0n2) 00:11:21.975 Could not set queue depth (nvme0n3) 00:11:21.975 Could not set queue depth (nvme0n4) 00:11:22.233 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.233 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.233 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.233 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.233 fio-3.35 00:11:22.233 Starting 4 threads 00:11:23.606 00:11:23.606 job0: (groupid=0, jobs=1): err= 0: pid=77176: Mon Jul 15 19:27:12 2024 00:11:23.606 read: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec) 00:11:23.606 slat (nsec): min=13705, max=38729, avg=16237.51, stdev=2687.08 00:11:23.606 clat (usec): min=139, max=1844, avg=162.36, stdev=36.50 00:11:23.606 lat (usec): min=153, max=1862, avg=178.60, stdev=36.77 00:11:23.606 clat percentiles (usec): 00:11:23.606 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:11:23.606 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:11:23.606 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 
95.00th=[ 182], 00:11:23.606 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 519], 99.95th=[ 685], 00:11:23.606 | 99.99th=[ 1844] 00:11:23.606 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:23.606 slat (nsec): min=20005, max=79130, avg=23081.99, stdev=4013.17 00:11:23.606 clat (usec): min=98, max=443, avg=123.79, stdev=12.24 00:11:23.606 lat (usec): min=121, max=478, avg=146.87, stdev=13.26 00:11:23.606 clat percentiles (usec): 00:11:23.606 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 116], 00:11:23.606 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:11:23.607 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 143], 00:11:23.607 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 293], 00:11:23.607 | 99.99th=[ 445] 00:11:23.607 bw ( KiB/s): min=12288, max=12288, per=28.89%, avg=12288.00, stdev= 0.00, samples=1 00:11:23.607 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:23.607 lat (usec) : 100=0.03%, 250=99.77%, 500=0.12%, 750=0.07% 00:11:23.607 lat (msec) : 2=0.02% 00:11:23.607 cpu : usr=2.20%, sys=9.00%, ctx=6084, majf=0, minf=15 00:11:23.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 issued rwts: total=3008,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.607 job1: (groupid=0, jobs=1): err= 0: pid=77177: Mon Jul 15 19:27:12 2024 00:11:23.607 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:23.607 slat (usec): min=11, max=101, avg=17.50, stdev= 5.13 00:11:23.607 clat (usec): min=104, max=7425, avg=234.79, stdev=194.61 00:11:23.607 lat (usec): min=160, max=7443, avg=252.29, stdev=193.93 00:11:23.607 clat percentiles (usec): 00:11:23.607 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:23.607 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 258], 60.00th=[ 277], 00:11:23.607 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:11:23.607 | 99.00th=[ 351], 99.50th=[ 545], 99.90th=[ 1844], 99.95th=[ 3523], 00:11:23.607 | 99.99th=[ 7439] 00:11:23.607 write: IOPS=2292, BW=9171KiB/s (9391kB/s)(9180KiB/1001msec); 0 zone resets 00:11:23.607 slat (nsec): min=12896, max=92609, avg=25024.66, stdev=4912.61 00:11:23.607 clat (usec): min=102, max=555, avg=181.64, stdev=53.01 00:11:23.607 lat (usec): min=128, max=576, avg=206.67, stdev=50.75 00:11:23.607 clat percentiles (usec): 00:11:23.607 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:11:23.607 | 30.00th=[ 131], 40.00th=[ 137], 50.00th=[ 208], 60.00th=[ 219], 00:11:23.607 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:11:23.607 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 529], 99.95th=[ 545], 00:11:23.607 | 99.99th=[ 553] 00:11:23.607 bw ( KiB/s): min=12288, max=12288, per=28.89%, avg=12288.00, stdev= 0.00, samples=1 00:11:23.607 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:23.607 lat (usec) : 250=74.60%, 500=25.05%, 750=0.14%, 1000=0.07% 00:11:23.607 lat (msec) : 2=0.09%, 4=0.02%, 10=0.02% 00:11:23.607 cpu : usr=1.60%, sys=7.30%, ctx=4349, majf=0, minf=6 00:11:23.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 issued rwts: total=2048,2295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.607 job2: (groupid=0, jobs=1): err= 0: pid=77178: Mon Jul 15 19:27:12 2024 00:11:23.607 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:23.607 slat (usec): min=11, max=211, avg=15.80, stdev= 5.73 00:11:23.607 clat (usec): min=50, max=1673, avg=233.92, stdev=72.69 00:11:23.607 lat (usec): min=171, max=1685, avg=249.72, stdev=72.46 00:11:23.607 clat percentiles (usec): 00:11:23.607 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:11:23.607 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 255], 60.00th=[ 273], 00:11:23.607 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:11:23.607 | 99.00th=[ 347], 99.50th=[ 400], 99.90th=[ 889], 99.95th=[ 1123], 00:11:23.607 | 99.99th=[ 1680] 00:11:23.607 write: IOPS=2284, BW=9139KiB/s (9358kB/s)(9148KiB/1001msec); 0 zone resets 00:11:23.607 slat (nsec): min=12877, max=69371, avg=23135.07, stdev=4136.18 00:11:23.607 clat (usec): min=117, max=605, avg=187.22, stdev=47.42 00:11:23.607 lat (usec): min=138, max=621, avg=210.35, stdev=47.06 00:11:23.607 clat percentiles (usec): 00:11:23.607 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:11:23.607 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 210], 60.00th=[ 219], 00:11:23.607 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 237], 95.00th=[ 245], 00:11:23.607 | 99.00th=[ 269], 99.50th=[ 310], 99.90th=[ 424], 99.95th=[ 429], 00:11:23.607 | 99.99th=[ 603] 00:11:23.607 bw ( KiB/s): min=12288, max=12288, per=28.89%, avg=12288.00, stdev= 0.00, samples=1 00:11:23.607 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:23.607 lat (usec) : 100=0.02%, 250=74.21%, 500=25.58%, 750=0.09%, 1000=0.05% 00:11:23.607 lat (msec) : 2=0.05% 00:11:23.607 cpu : usr=1.70%, sys=6.40%, ctx=4341, majf=0, minf=9 00:11:23.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 issued rwts: total=2048,2287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.607 job3: (groupid=0, jobs=1): err= 0: pid=77179: Mon Jul 15 19:27:12 2024 00:11:23.607 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:23.607 slat (nsec): min=13680, max=34958, avg=15814.64, stdev=2455.84 00:11:23.607 clat (usec): min=155, max=1555, avg=180.20, stdev=29.53 00:11:23.607 lat (usec): min=170, max=1570, avg=196.02, stdev=29.71 00:11:23.607 clat percentiles (usec): 00:11:23.607 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:11:23.607 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:11:23.607 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 200], 00:11:23.607 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 293], 99.95th=[ 297], 00:11:23.607 | 99.99th=[ 1549] 00:11:23.607 write: IOPS=2987, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec); 0 zone resets 00:11:23.607 slat (nsec): min=19384, max=80238, avg=23183.65, stdev=4971.09 00:11:23.607 clat (usec): min=107, max=461, avg=140.09, stdev=12.46 00:11:23.607 lat (usec): min=129, max=483, avg=163.27, stdev=14.04 00:11:23.607 clat percentiles (usec): 00:11:23.607 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 
00:11:23.607 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:11:23.607 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:11:23.607 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 204], 99.95th=[ 217], 00:11:23.607 | 99.99th=[ 461] 00:11:23.607 bw ( KiB/s): min=12288, max=12288, per=28.89%, avg=12288.00, stdev= 0.00, samples=1 00:11:23.607 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:23.607 lat (usec) : 250=99.93%, 500=0.05% 00:11:23.607 lat (msec) : 2=0.02% 00:11:23.607 cpu : usr=1.80%, sys=8.30%, ctx=5550, majf=0, minf=15 00:11:23.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.607 issued rwts: total=2560,2990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.607 00:11:23.607 Run status group 0 (all jobs): 00:11:23.607 READ: bw=37.7MiB/s (39.5MB/s), 8184KiB/s-11.7MiB/s (8380kB/s-12.3MB/s), io=37.8MiB (39.6MB), run=1001-1001msec 00:11:23.607 WRITE: bw=41.5MiB/s (43.6MB/s), 9139KiB/s-12.0MiB/s (9358kB/s-12.6MB/s), io=41.6MiB (43.6MB), run=1001-1001msec 00:11:23.607 00:11:23.607 Disk stats (read/write): 00:11:23.607 nvme0n1: ios=2610/2666, merge=0/0, ticks=459/359, in_queue=818, util=88.68% 00:11:23.607 nvme0n2: ios=1847/2048, merge=0/0, ticks=434/378, in_queue=812, util=87.66% 00:11:23.607 nvme0n3: ios=1794/2048, merge=0/0, ticks=412/399, in_queue=811, util=89.25% 00:11:23.607 nvme0n4: ios=2230/2560, merge=0/0, ticks=414/389, in_queue=803, util=89.80% 00:11:23.607 19:27:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:23.607 [global] 00:11:23.607 thread=1 00:11:23.607 invalidate=1 00:11:23.607 rw=write 00:11:23.607 time_based=1 00:11:23.607 runtime=1 00:11:23.607 ioengine=libaio 00:11:23.607 direct=1 00:11:23.607 bs=4096 00:11:23.607 iodepth=128 00:11:23.607 norandommap=0 00:11:23.607 numjobs=1 00:11:23.607 00:11:23.607 verify_dump=1 00:11:23.607 verify_backlog=512 00:11:23.607 verify_state_save=0 00:11:23.607 do_verify=1 00:11:23.607 verify=crc32c-intel 00:11:23.607 [job0] 00:11:23.607 filename=/dev/nvme0n1 00:11:23.607 [job1] 00:11:23.607 filename=/dev/nvme0n2 00:11:23.607 [job2] 00:11:23.607 filename=/dev/nvme0n3 00:11:23.607 [job3] 00:11:23.607 filename=/dev/nvme0n4 00:11:23.607 Could not set queue depth (nvme0n1) 00:11:23.607 Could not set queue depth (nvme0n2) 00:11:23.607 Could not set queue depth (nvme0n3) 00:11:23.607 Could not set queue depth (nvme0n4) 00:11:23.607 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.607 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.607 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.607 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.607 fio-3.35 00:11:23.607 Starting 4 threads 00:11:24.541 00:11:24.541 job0: (groupid=0, jobs=1): err= 0: pid=77233: Mon Jul 15 19:27:14 2024 00:11:24.541 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:11:24.541 slat (usec): min=4, max=12021, avg=242.66, stdev=1115.06 00:11:24.541 clat (usec): min=20906, max=59119, 
avg=32569.60, stdev=6907.62 00:11:24.541 lat (usec): min=20920, max=59690, avg=32812.25, stdev=6985.09 00:11:24.541 clat percentiles (usec): 00:11:24.541 | 1.00th=[21890], 5.00th=[23987], 10.00th=[24511], 20.00th=[25297], 00:11:24.541 | 30.00th=[27657], 40.00th=[29230], 50.00th=[31327], 60.00th=[33817], 00:11:24.541 | 70.00th=[36439], 80.00th=[39060], 90.00th=[43254], 95.00th=[44303], 00:11:24.541 | 99.00th=[47973], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:11:24.541 | 99.99th=[58983] 00:11:24.541 write: IOPS=2177, BW=8710KiB/s (8919kB/s)(8788KiB/1009msec); 0 zone resets 00:11:24.541 slat (usec): min=3, max=9717, avg=219.65, stdev=884.39 00:11:24.541 clat (usec): min=7906, max=60742, avg=27482.63, stdev=7592.47 00:11:24.541 lat (usec): min=8935, max=60779, avg=27702.27, stdev=7642.04 00:11:24.541 clat percentiles (usec): 00:11:24.541 | 1.00th=[17433], 5.00th=[19530], 10.00th=[20841], 20.00th=[22676], 00:11:24.541 | 30.00th=[23725], 40.00th=[24511], 50.00th=[25560], 60.00th=[26870], 00:11:24.541 | 70.00th=[28443], 80.00th=[30540], 90.00th=[36439], 95.00th=[47449], 00:11:24.541 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:11:24.541 | 99.99th=[60556] 00:11:24.541 bw ( KiB/s): min= 7232, max= 9328, per=15.39%, avg=8280.00, stdev=1482.10, samples=2 00:11:24.541 iops : min= 1808, max= 2332, avg=2070.00, stdev=370.52, samples=2 00:11:24.541 lat (msec) : 10=0.21%, 20=2.97%, 50=94.65%, 100=2.17% 00:11:24.541 cpu : usr=2.58%, sys=6.75%, ctx=707, majf=0, minf=11 00:11:24.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:24.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.541 issued rwts: total=2048,2197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.541 job1: (groupid=0, jobs=1): err= 0: pid=77234: Mon Jul 15 19:27:14 2024 00:11:24.541 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:11:24.541 slat (usec): min=8, max=6151, avg=76.03, stdev=353.61 00:11:24.541 clat (usec): min=7731, max=18801, avg=10264.18, stdev=1203.46 00:11:24.541 lat (usec): min=7971, max=19268, avg=10340.21, stdev=1174.75 00:11:24.541 clat percentiles (usec): 00:11:24.541 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9896], 00:11:24.541 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:11:24.541 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11469], 00:11:24.542 | 99.00th=[16319], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:11:24.542 | 99.99th=[18744] 00:11:24.542 write: IOPS=6565, BW=25.6MiB/s (26.9MB/s)(25.7MiB/1002msec); 0 zone resets 00:11:24.542 slat (usec): min=11, max=4221, avg=73.93, stdev=294.85 00:11:24.542 clat (usec): min=251, max=15414, avg=9677.39, stdev=1135.18 00:11:24.542 lat (usec): min=1970, max=15446, avg=9751.32, stdev=1140.79 00:11:24.542 clat percentiles (usec): 00:11:24.542 | 1.00th=[ 5473], 5.00th=[ 8291], 10.00th=[ 8455], 20.00th=[ 8717], 00:11:24.542 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10159], 00:11:24.542 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:11:24.542 | 99.00th=[11338], 99.50th=[11731], 99.90th=[13960], 99.95th=[14746], 00:11:24.542 | 99.99th=[15401] 00:11:24.542 bw ( KiB/s): min=24632, max=26976, per=47.97%, avg=25804.00, stdev=1657.46, samples=2 00:11:24.542 iops : min= 6158, max= 6744, avg=6451.00, 
stdev=414.36, samples=2 00:11:24.542 lat (usec) : 500=0.01% 00:11:24.542 lat (msec) : 2=0.02%, 4=0.26%, 10=41.53%, 20=58.19% 00:11:24.542 cpu : usr=5.69%, sys=16.08%, ctx=735, majf=0, minf=5 00:11:24.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:24.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.542 issued rwts: total=6144,6579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.542 job2: (groupid=0, jobs=1): err= 0: pid=77235: Mon Jul 15 19:27:14 2024 00:11:24.542 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:11:24.542 slat (usec): min=6, max=14500, avg=213.17, stdev=1205.28 00:11:24.542 clat (usec): min=13617, max=49153, avg=25720.33, stdev=7936.40 00:11:24.542 lat (usec): min=13649, max=49176, avg=25933.50, stdev=8035.94 00:11:24.542 clat percentiles (usec): 00:11:24.542 | 1.00th=[13698], 5.00th=[16188], 10.00th=[16450], 20.00th=[16909], 00:11:24.542 | 30.00th=[19530], 40.00th=[22676], 50.00th=[25297], 60.00th=[26608], 00:11:24.542 | 70.00th=[30016], 80.00th=[32900], 90.00th=[37487], 95.00th=[40109], 00:11:24.542 | 99.00th=[44303], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:11:24.542 | 99.99th=[49021] 00:11:24.542 write: IOPS=2220, BW=8883KiB/s (9096kB/s)(8936KiB/1006msec); 0 zone resets 00:11:24.542 slat (usec): min=17, max=6767, avg=242.57, stdev=804.49 00:11:24.542 clat (usec): min=4830, max=58657, avg=33184.36, stdev=11161.29 00:11:24.542 lat (usec): min=5959, max=58689, avg=33426.93, stdev=11222.79 00:11:24.542 clat percentiles (usec): 00:11:24.542 | 1.00th=[ 8586], 5.00th=[18482], 10.00th=[20841], 20.00th=[22938], 00:11:24.542 | 30.00th=[26084], 40.00th=[30278], 50.00th=[32113], 60.00th=[33817], 00:11:24.542 | 70.00th=[38011], 80.00th=[43254], 90.00th=[50594], 95.00th=[54789], 00:11:24.542 | 99.00th=[56886], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:11:24.542 | 99.99th=[58459] 00:11:24.542 bw ( KiB/s): min= 8192, max= 8656, per=15.66%, avg=8424.00, stdev=328.10, samples=2 00:11:24.542 iops : min= 2048, max= 2164, avg=2106.00, stdev=82.02, samples=2 00:11:24.542 lat (msec) : 10=0.75%, 20=18.19%, 50=74.99%, 100=6.07% 00:11:24.542 cpu : usr=2.79%, sys=7.86%, ctx=285, majf=0, minf=17 00:11:24.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:11:24.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.542 issued rwts: total=2048,2234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.542 job3: (groupid=0, jobs=1): err= 0: pid=77236: Mon Jul 15 19:27:14 2024 00:11:24.542 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8224KiB/1008msec) 00:11:24.542 slat (usec): min=3, max=13651, avg=225.67, stdev=1013.34 00:11:24.542 clat (usec): min=5833, max=45085, avg=28189.13, stdev=6227.33 00:11:24.542 lat (usec): min=10722, max=45110, avg=28414.80, stdev=6303.10 00:11:24.542 clat percentiles (usec): 00:11:24.542 | 1.00th=[13042], 5.00th=[17695], 10.00th=[21103], 20.00th=[23725], 00:11:24.542 | 30.00th=[24773], 40.00th=[25822], 50.00th=[27919], 60.00th=[30278], 00:11:24.542 | 70.00th=[31327], 80.00th=[33817], 90.00th=[36439], 95.00th=[38011], 00:11:24.542 | 99.00th=[40109], 99.50th=[40633], 99.90th=[44827], 99.95th=[44827], 00:11:24.542 
| 99.99th=[44827] 00:11:24.542 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:11:24.542 slat (usec): min=5, max=9006, avg=202.18, stdev=764.47 00:11:24.542 clat (usec): min=10790, max=60488, avg=27092.17, stdev=10305.58 00:11:24.542 lat (usec): min=10815, max=60521, avg=27294.34, stdev=10368.55 00:11:24.542 clat percentiles (usec): 00:11:24.542 | 1.00th=[11600], 5.00th=[12256], 10.00th=[13042], 20.00th=[15401], 00:11:24.542 | 30.00th=[22938], 40.00th=[25035], 50.00th=[26346], 60.00th=[28443], 00:11:24.542 | 70.00th=[30278], 80.00th=[33162], 90.00th=[41157], 95.00th=[49021], 00:11:24.542 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:11:24.542 | 99.99th=[60556] 00:11:24.542 bw ( KiB/s): min= 7576, max=11944, per=18.14%, avg=9760.00, stdev=3088.64, samples=2 00:11:24.542 iops : min= 1894, max= 2986, avg=2440.00, stdev=772.16, samples=2 00:11:24.542 lat (msec) : 10=0.02%, 20=16.29%, 50=81.50%, 100=2.19% 00:11:24.542 cpu : usr=1.49%, sys=8.44%, ctx=669, majf=0, minf=14 00:11:24.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:24.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.542 issued rwts: total=2056,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.542 00:11:24.542 Run status group 0 (all jobs): 00:11:24.542 READ: bw=47.6MiB/s (49.9MB/s), 8119KiB/s-24.0MiB/s (8314kB/s-25.1MB/s), io=48.0MiB (50.4MB), run=1002-1009msec 00:11:24.542 WRITE: bw=52.5MiB/s (55.1MB/s), 8710KiB/s-25.6MiB/s (8919kB/s-26.9MB/s), io=53.0MiB (55.6MB), run=1002-1009msec 00:11:24.542 00:11:24.542 Disk stats (read/write): 00:11:24.542 nvme0n1: ios=1795/2048, merge=0/0, ticks=17051/17098, in_queue=34149, util=89.08% 00:11:24.542 nvme0n2: ios=5426/5632, merge=0/0, ticks=13439/12868, in_queue=26307, util=90.51% 00:11:24.542 nvme0n3: ios=1553/1999, merge=0/0, ticks=13419/21174, in_queue=34593, util=89.56% 00:11:24.542 nvme0n4: ios=2041/2048, merge=0/0, ticks=18275/15371, in_queue=33646, util=90.12% 00:11:24.800 19:27:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:24.800 [global] 00:11:24.800 thread=1 00:11:24.800 invalidate=1 00:11:24.800 rw=randwrite 00:11:24.800 time_based=1 00:11:24.800 runtime=1 00:11:24.800 ioengine=libaio 00:11:24.800 direct=1 00:11:24.800 bs=4096 00:11:24.800 iodepth=128 00:11:24.800 norandommap=0 00:11:24.800 numjobs=1 00:11:24.800 00:11:24.800 verify_dump=1 00:11:24.800 verify_backlog=512 00:11:24.800 verify_state_save=0 00:11:24.800 do_verify=1 00:11:24.800 verify=crc32c-intel 00:11:24.800 [job0] 00:11:24.800 filename=/dev/nvme0n1 00:11:24.800 [job1] 00:11:24.800 filename=/dev/nvme0n2 00:11:24.800 [job2] 00:11:24.800 filename=/dev/nvme0n3 00:11:24.800 [job3] 00:11:24.800 filename=/dev/nvme0n4 00:11:24.800 Could not set queue depth (nvme0n1) 00:11:24.800 Could not set queue depth (nvme0n2) 00:11:24.800 Could not set queue depth (nvme0n3) 00:11:24.800 Could not set queue depth (nvme0n4) 00:11:24.800 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.800 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.800 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.800 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.800 fio-3.35 00:11:24.800 Starting 4 threads 00:11:26.179 00:11:26.179 job0: (groupid=0, jobs=1): err= 0: pid=77293: Mon Jul 15 19:27:15 2024 00:11:26.179 read: IOPS=5601, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1005msec) 00:11:26.179 slat (usec): min=3, max=10071, avg=96.86, stdev=606.26 00:11:26.179 clat (usec): min=4263, max=21379, avg=12157.81, stdev=3055.50 00:11:26.179 lat (usec): min=4288, max=21392, avg=12254.67, stdev=3083.21 00:11:26.179 clat percentiles (usec): 00:11:26.179 | 1.00th=[ 5407], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:11:26.179 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:11:26.179 | 70.00th=[13042], 80.00th=[14353], 90.00th=[17171], 95.00th=[18744], 00:11:26.179 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:11:26.179 | 99.99th=[21365] 00:11:26.179 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:11:26.179 slat (usec): min=5, max=8434, avg=72.99, stdev=255.13 00:11:26.179 clat (usec): min=3909, max=21339, avg=10429.57, stdev=2295.23 00:11:26.179 lat (usec): min=3932, max=21347, avg=10502.56, stdev=2310.23 00:11:26.179 clat percentiles (usec): 00:11:26.179 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 6128], 20.00th=[ 8717], 00:11:26.179 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:11:26.179 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12256], 00:11:26.179 | 99.00th=[12518], 99.50th=[12780], 99.90th=[21103], 99.95th=[21365], 00:11:26.179 | 99.99th=[21365] 00:11:26.179 bw ( KiB/s): min=20752, max=24352, per=34.84%, avg=22552.00, stdev=2545.58, samples=2 00:11:26.179 iops : min= 5188, max= 6088, avg=5638.00, stdev=636.40, samples=2 00:11:26.179 lat (msec) : 4=0.05%, 10=24.16%, 20=74.45%, 50=1.34% 00:11:26.179 cpu : usr=4.58%, sys=13.84%, ctx=868, majf=0, minf=8 00:11:26.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:26.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:26.179 issued rwts: total=5630,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:26.179 job1: (groupid=0, jobs=1): err= 0: pid=77294: Mon Jul 15 19:27:15 2024 00:11:26.179 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:11:26.179 slat (usec): min=3, max=6784, avg=190.99, stdev=863.77 00:11:26.179 clat (usec): min=15421, max=32697, avg=23058.33, stdev=2977.28 00:11:26.179 lat (usec): min=15446, max=32712, avg=23249.31, stdev=3054.11 00:11:26.179 clat percentiles (usec): 00:11:26.179 | 1.00th=[15926], 5.00th=[17171], 10.00th=[19530], 20.00th=[21103], 00:11:26.179 | 30.00th=[22152], 40.00th=[22414], 50.00th=[23200], 60.00th=[23462], 00:11:26.179 | 70.00th=[23725], 80.00th=[24511], 90.00th=[27132], 95.00th=[29230], 00:11:26.179 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32637], 99.95th=[32637], 00:11:26.179 | 99.99th=[32637] 00:11:26.179 write: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1008msec); 0 zone resets 00:11:26.179 slat (usec): min=5, max=11029, avg=173.61, stdev=707.25 00:11:26.179 clat (usec): min=6766, max=35140, avg=24101.75, stdev=3469.97 00:11:26.180 lat (usec): min=7246, max=35174, avg=24275.36, stdev=3530.30 00:11:26.180 clat percentiles (usec): 00:11:26.180 
| 1.00th=[10814], 5.00th=[17957], 10.00th=[19792], 20.00th=[22938], 00:11:26.180 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:11:26.180 | 70.00th=[25035], 80.00th=[25297], 90.00th=[27657], 95.00th=[29754], 00:11:26.180 | 99.00th=[32637], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:11:26.180 | 99.99th=[35390] 00:11:26.180 bw ( KiB/s): min= 9508, max=12016, per=16.63%, avg=10762.00, stdev=1773.42, samples=2 00:11:26.180 iops : min= 2377, max= 3004, avg=2690.50, stdev=443.36, samples=2 00:11:26.180 lat (msec) : 10=0.37%, 20=10.78%, 50=88.85% 00:11:26.180 cpu : usr=2.68%, sys=7.35%, ctx=984, majf=0, minf=13 00:11:26.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:26.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:26.180 issued rwts: total=2560,2820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:26.180 job2: (groupid=0, jobs=1): err= 0: pid=77295: Mon Jul 15 19:27:15 2024 00:11:26.180 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:11:26.180 slat (usec): min=3, max=9860, avg=194.82, stdev=898.15 00:11:26.180 clat (usec): min=14980, max=35918, avg=24064.33, stdev=3442.03 00:11:26.180 lat (usec): min=14992, max=35936, avg=24259.15, stdev=3519.00 00:11:26.180 clat percentiles (usec): 00:11:26.180 | 1.00th=[15664], 5.00th=[18220], 10.00th=[20055], 20.00th=[21890], 00:11:26.180 | 30.00th=[22676], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:11:26.180 | 70.00th=[24773], 80.00th=[27132], 90.00th=[28967], 95.00th=[30016], 00:11:26.180 | 99.00th=[33162], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:11:26.180 | 99.99th=[35914] 00:11:26.180 write: IOPS=2782, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1013msec); 0 zone resets 00:11:26.180 slat (usec): min=4, max=10100, avg=171.02, stdev=676.24 00:11:26.180 clat (usec): min=9191, max=36852, avg=23518.40, stdev=3772.65 00:11:26.180 lat (usec): min=11637, max=36897, avg=23689.43, stdev=3825.89 00:11:26.180 clat percentiles (usec): 00:11:26.180 | 1.00th=[12518], 5.00th=[16057], 10.00th=[17957], 20.00th=[21103], 00:11:26.180 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:11:26.180 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27132], 95.00th=[29230], 00:11:26.180 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[35914], 00:11:26.180 | 99.99th=[36963] 00:11:26.180 bw ( KiB/s): min= 9280, max=12272, per=16.65%, avg=10776.00, stdev=2115.66, samples=2 00:11:26.180 iops : min= 2320, max= 3068, avg=2694.00, stdev=528.92, samples=2 00:11:26.180 lat (msec) : 10=0.02%, 20=13.79%, 50=86.19% 00:11:26.180 cpu : usr=2.57%, sys=7.61%, ctx=945, majf=0, minf=15 00:11:26.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:26.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:26.180 issued rwts: total=2560,2819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:26.180 job3: (groupid=0, jobs=1): err= 0: pid=77296: Mon Jul 15 19:27:15 2024 00:11:26.180 read: IOPS=4822, BW=18.8MiB/s (19.8MB/s)(19.0MiB/1007msec) 00:11:26.180 slat (usec): min=4, max=11360, avg=107.40, stdev=681.14 00:11:26.180 clat (usec): min=3142, max=23934, avg=13602.54, stdev=3285.56 
00:11:26.180 lat (usec): min=5006, max=23947, avg=13709.95, stdev=3315.25 00:11:26.180 clat percentiles (usec): 00:11:26.180 | 1.00th=[ 5997], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:11:26.180 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13435], 00:11:26.180 | 70.00th=[14746], 80.00th=[15533], 90.00th=[18220], 95.00th=[20841], 00:11:26.180 | 99.00th=[22938], 99.50th=[23462], 99.90th=[23987], 99.95th=[23987], 00:11:26.180 | 99.99th=[23987] 00:11:26.180 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:11:26.180 slat (usec): min=4, max=10282, avg=86.46, stdev=412.66 00:11:26.180 clat (usec): min=3972, max=23796, avg=12022.16, stdev=2516.84 00:11:26.180 lat (usec): min=3995, max=23806, avg=12108.61, stdev=2551.51 00:11:26.180 clat percentiles (usec): 00:11:26.180 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 7242], 20.00th=[10552], 00:11:26.180 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:11:26.180 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:11:26.180 | 99.00th=[14353], 99.50th=[14615], 99.90th=[23462], 99.95th=[23725], 00:11:26.180 | 99.99th=[23725] 00:11:26.180 bw ( KiB/s): min=20480, max=20480, per=31.64%, avg=20480.00, stdev= 0.00, samples=2 00:11:26.180 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:26.180 lat (msec) : 4=0.03%, 10=10.32%, 20=86.02%, 50=3.63% 00:11:26.180 cpu : usr=4.87%, sys=11.83%, ctx=721, majf=0, minf=9 00:11:26.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:26.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:26.180 issued rwts: total=4856,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:26.180 00:11:26.180 Run status group 0 (all jobs): 00:11:26.180 READ: bw=60.2MiB/s (63.1MB/s), 9.87MiB/s-21.9MiB/s (10.4MB/s-22.9MB/s), io=61.0MiB (63.9MB), run=1005-1013msec 00:11:26.180 WRITE: bw=63.2MiB/s (66.3MB/s), 10.9MiB/s-21.9MiB/s (11.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1005-1013msec 00:11:26.180 00:11:26.180 Disk stats (read/write): 00:11:26.180 nvme0n1: ios=4658/5063, merge=0/0, ticks=51657/51405, in_queue=103062, util=87.86% 00:11:26.180 nvme0n2: ios=2094/2460, merge=0/0, ticks=23064/27924, in_queue=50988, util=87.54% 00:11:26.180 nvme0n3: ios=2048/2502, merge=0/0, ticks=23911/27262, in_queue=51173, util=88.90% 00:11:26.180 nvme0n4: ios=4096/4423, merge=0/0, ticks=51558/50933, in_queue=102491, util=89.77% 00:11:26.180 19:27:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:26.180 19:27:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77310 00:11:26.180 19:27:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:26.180 19:27:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:26.180 [global] 00:11:26.180 thread=1 00:11:26.180 invalidate=1 00:11:26.180 rw=read 00:11:26.180 time_based=1 00:11:26.180 runtime=10 00:11:26.180 ioengine=libaio 00:11:26.180 direct=1 00:11:26.180 bs=4096 00:11:26.180 iodepth=1 00:11:26.180 norandommap=1 00:11:26.180 numjobs=1 00:11:26.180 00:11:26.180 [job0] 00:11:26.180 filename=/dev/nvme0n1 00:11:26.180 [job1] 00:11:26.180 filename=/dev/nvme0n2 00:11:26.180 [job2] 00:11:26.180 filename=/dev/nvme0n3 00:11:26.180 [job3] 00:11:26.180 filename=/dev/nvme0n4 
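For reference, the job layout printed above is what scripts/fio-wrapper expands '-p nvmf -i 4096 -d 128 -t write -r 1 -v' into for the four namespaces of cnode1. A minimal standalone sketch follows; every parameter and filename is copied from the listing above, while the temporary job-file path and the direct fio invocation are illustrative, since the wrapper's own internals are not part of this log.

# Sketch: hand-rolled equivalent of the fio-wrapper call logged above.
# All settings mirror the printed [global] section; /tmp/nvmf_write.fio is an
# arbitrary scratch path, not something the wrapper itself creates.
cat > /tmp/nvmf_write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf_write.fio

The fio stages in this log differ only in rw=, iodepth= and runtime= (write and randwrite at queue depth 1 and 128, then a 10 s read for the hotplug stage), so the same skeleton covers all of them.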
00:11:26.180 Could not set queue depth (nvme0n1) 00:11:26.180 Could not set queue depth (nvme0n2) 00:11:26.180 Could not set queue depth (nvme0n3) 00:11:26.180 Could not set queue depth (nvme0n4) 00:11:26.180 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.180 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.180 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.180 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.180 fio-3.35 00:11:26.180 Starting 4 threads 00:11:29.463 19:27:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:29.463 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=47611904, buflen=4096 00:11:29.463 fio: pid=77359, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:29.463 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:29.721 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=44130304, buflen=4096 00:11:29.721 fio: pid=77358, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:29.721 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.721 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:29.979 fio: pid=77356, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:29.979 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=56483840, buflen=4096 00:11:29.979 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.979 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:29.979 fio: pid=77357, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:29.979 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=57458688, buflen=4096 00:11:30.237 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.237 19:27:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:30.237 00:11:30.237 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77356: Mon Jul 15 19:27:19 2024 00:11:30.237 read: IOPS=3968, BW=15.5MiB/s (16.3MB/s)(53.9MiB/3475msec) 00:11:30.237 slat (usec): min=9, max=9233, avg=17.90, stdev=126.86 00:11:30.237 clat (usec): min=138, max=7206, avg=232.40, stdev=115.62 00:11:30.237 lat (usec): min=155, max=9458, avg=250.30, stdev=171.50 00:11:30.237 clat percentiles (usec): 00:11:30.237 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:11:30.237 | 30.00th=[ 172], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 260], 00:11:30.237 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:11:30.237 | 99.00th=[ 367], 99.50th=[ 482], 99.90th=[ 1123], 99.95th=[ 2638], 00:11:30.237 | 99.99th=[ 4490] 00:11:30.237 bw ( KiB/s): min=13479, max=22112, per=29.56%, avg=15953.17, 
stdev=3301.56, samples=6 00:11:30.237 iops : min= 3369, max= 5528, avg=3988.17, stdev=825.50, samples=6 00:11:30.237 lat (usec) : 250=49.51%, 500=50.05%, 750=0.27%, 1000=0.05% 00:11:30.237 lat (msec) : 2=0.04%, 4=0.05%, 10=0.01% 00:11:30.237 cpu : usr=1.38%, sys=5.30%, ctx=13798, majf=0, minf=1 00:11:30.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.237 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.237 issued rwts: total=13791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.237 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77357: Mon Jul 15 19:27:19 2024 00:11:30.237 read: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(54.8MiB/3722msec) 00:11:30.237 slat (usec): min=14, max=16545, avg=23.68, stdev=229.66 00:11:30.237 clat (usec): min=3, max=7470, avg=239.68, stdev=128.11 00:11:30.237 lat (usec): min=141, max=16791, avg=263.36, stdev=262.45 00:11:30.237 clat percentiles (usec): 00:11:30.237 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 151], 00:11:30.237 | 30.00th=[ 217], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:11:30.237 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:11:30.237 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 668], 99.95th=[ 1123], 00:11:30.237 | 99.99th=[ 7439] 00:11:30.237 bw ( KiB/s): min=12648, max=21873, per=27.06%, avg=14601.43, stdev=3239.13, samples=7 00:11:30.237 iops : min= 3162, max= 5468, avg=3650.29, stdev=809.71, samples=7 00:11:30.237 lat (usec) : 4=0.02%, 50=0.01%, 100=0.02%, 250=30.90%, 500=68.89% 00:11:30.237 lat (usec) : 750=0.09% 00:11:30.237 lat (msec) : 2=0.03%, 4=0.01%, 10=0.02% 00:11:30.237 cpu : usr=1.24%, sys=5.97%, ctx=14063, majf=0, minf=1 00:11:30.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.237 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.237 issued rwts: total=14029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.237 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77358: Mon Jul 15 19:27:19 2024 00:11:30.237 read: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(42.1MiB/3239msec) 00:11:30.237 slat (usec): min=9, max=11208, avg=19.60, stdev=131.44 00:11:30.237 clat (usec): min=150, max=5997, avg=278.83, stdev=85.49 00:11:30.237 lat (usec): min=168, max=11442, avg=298.43, stdev=156.58 00:11:30.237 clat percentiles (usec): 00:11:30.237 | 1.00th=[ 172], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 262], 00:11:30.237 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:11:30.237 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:11:30.237 | 99.00th=[ 334], 99.50th=[ 416], 99.90th=[ 1029], 99.95th=[ 2089], 00:11:30.237 | 99.99th=[ 3720] 00:11:30.237 bw ( KiB/s): min=12656, max=13888, per=24.91%, avg=13445.00, stdev=456.82, samples=6 00:11:30.237 iops : min= 3164, max= 3472, avg=3361.17, stdev=114.18, samples=6 00:11:30.237 lat (usec) : 250=3.45%, 500=96.25%, 750=0.14%, 1000=0.05% 00:11:30.237 lat (msec) : 2=0.05%, 4=0.05%, 10=0.01% 00:11:30.237 cpu : usr=1.30%, sys=5.00%, ctx=10781, majf=0, minf=1 00:11:30.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.237 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.237 issued rwts: total=10775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.237 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77359: Mon Jul 15 19:27:19 2024 00:11:30.237 read: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(45.4MiB/2970msec) 00:11:30.237 slat (nsec): min=8894, max=79819, avg=16350.78, stdev=4333.38 00:11:30.237 clat (usec): min=153, max=6583, avg=237.28, stdev=78.02 00:11:30.237 lat (usec): min=168, max=6598, avg=253.63, stdev=77.77 00:11:30.237 clat percentiles (usec): 00:11:30.237 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:11:30.237 | 30.00th=[ 202], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 262], 00:11:30.237 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:11:30.237 | 99.00th=[ 351], 99.50th=[ 429], 99.90th=[ 578], 99.95th=[ 873], 00:11:30.237 | 99.99th=[ 1549] 00:11:30.237 bw ( KiB/s): min=14136, max=20128, per=29.66%, avg=16009.60, stdev=2614.08, samples=5 00:11:30.238 iops : min= 3534, max= 5032, avg=4002.40, stdev=653.52, samples=5 00:11:30.238 lat (usec) : 250=50.46%, 500=49.26%, 750=0.21%, 1000=0.03% 00:11:30.238 lat (msec) : 2=0.03%, 10=0.01% 00:11:30.238 cpu : usr=1.18%, sys=6.03%, ctx=11634, majf=0, minf=1 00:11:30.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.238 issued rwts: total=11625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.238 00:11:30.238 Run status group 0 (all jobs): 00:11:30.238 READ: bw=52.7MiB/s (55.3MB/s), 13.0MiB/s-15.5MiB/s (13.6MB/s-16.3MB/s), io=196MiB (206MB), run=2970-3722msec 00:11:30.238 00:11:30.238 Disk stats (read/write): 00:11:30.238 nvme0n1: ios=13240/0, merge=0/0, ticks=3008/0, in_queue=3008, util=95.39% 00:11:30.238 nvme0n2: ios=13415/0, merge=0/0, ticks=3318/0, in_queue=3318, util=95.05% 00:11:30.238 nvme0n3: ios=10412/0, merge=0/0, ticks=2919/0, in_queue=2919, util=96.18% 00:11:30.238 nvme0n4: ios=11330/0, merge=0/0, ticks=2690/0, in_queue=2690, util=96.80% 00:11:30.495 19:27:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.495 19:27:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:30.753 19:27:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.753 19:27:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:31.011 19:27:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:31.011 19:27:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:31.270 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:31.270 19:27:21 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77310 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.529 nvmf hotplug test: fio failed as expected 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:31.529 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.786 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.787 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.787 rmmod nvme_tcp 00:11:31.787 rmmod nvme_fabrics 00:11:32.045 rmmod nvme_keyring 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76822 ']' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76822 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76822 ']' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76822 00:11:32.045 19:27:21 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76822 00:11:32.045 killing process with pid 76822 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76822' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76822 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76822 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.045 19:27:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:32.303 00:11:32.303 real 0m19.437s 00:11:32.303 user 1m15.189s 00:11:32.303 sys 0m8.581s 00:11:32.303 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.303 19:27:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.303 ************************************ 00:11:32.303 END TEST nvmf_fio_target 00:11:32.303 ************************************ 00:11:32.303 19:27:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:32.303 19:27:21 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:32.303 19:27:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.303 19:27:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.303 19:27:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.303 ************************************ 00:11:32.303 START TEST nvmf_bdevio 00:11:32.303 ************************************ 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:32.303 * Looking for test storage... 
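The stretch of log above (target/fio.sh@58 through @80) is the hotplug check: a 10-second read job keeps running against the exported namespaces while every backing bdev is deleted over RPC, and the stage only passes because fio itself exits non-zero. A condensed sketch of that pattern follows; the rpc.py path, bdev names and deletion order are taken from the log, while the job-file name and the sleep are placeholders for the fio-wrapper call and the script's own pacing.

# Sketch of the hotplug pattern: I/O in the background, hot-remove the bdevs,
# then treat a failing fio as success. ./read-10s.fio is a hypothetical job
# file standing in for the '-d 1 -t read -r 10' fio-wrapper invocation.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

fio ./read-10s.fio &
fio_pid=$!
sleep 3                               # give the read job time to get going

$rpc bdev_raid_delete concat0         # fio starts seeing remote I/O errors on nvme0n4
$rpc bdev_raid_delete raid0           # ...and on nvme0n3
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"      # plain namespaces plus the former RAID/concat members
done

fio_status=0
wait "$fio_pid" || fio_status=$?      # 4 in the run above
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi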
00:11:32.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.303 19:27:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.303 19:27:21 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:32.303 Cannot find device "nvmf_tgt_br" 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.303 Cannot find device "nvmf_tgt_br2" 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:32.303 Cannot find device "nvmf_tgt_br" 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:32.303 Cannot find device "nvmf_tgt_br2" 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:32.303 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:32.561 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:32.561 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:32.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:11:32.562 00:11:32.562 --- 10.0.0.2 ping statistics --- 00:11:32.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.562 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:32.562 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:32.562 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:11:32.562 00:11:32.562 --- 10.0.0.3 ping statistics --- 00:11:32.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.562 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:32.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
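Note: the ip/iptables commands traced above are nvmf_veth_init: the target side of a veth pair lives in the nvmf_tgt_ns_spdk namespace, the host-side ends are joined by a bridge, and TCP port 4420 is opened towards the initiator interface. A minimal reconstruction of that topology with the names and addresses from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here for brevity):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns "$NS"                         # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge ties the host-side ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # reachability checks, as in the trace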
00:11:32.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:32.562 00:11:32.562 --- 10.0.0.1 ping statistics --- 00:11:32.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.562 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:32.562 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77681 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77681 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77681 ']' 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.820 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.820 [2024-07-15 19:27:22.420341] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:11:32.820 [2024-07-15 19:27:22.420446] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.820 [2024-07-15 19:27:22.556554] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.820 [2024-07-15 19:27:22.618511] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.820 [2024-07-15 19:27:22.618561] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:32.820 [2024-07-15 19:27:22.618572] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.820 [2024-07-15 19:27:22.618581] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.820 [2024-07-15 19:27:22.618588] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.820 [2024-07-15 19:27:22.618740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:32.820 [2024-07-15 19:27:22.619799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:32.820 [2024-07-15 19:27:22.619909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:32.820 [2024-07-15 19:27:22.619916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.078 [2024-07-15 19:27:22.745227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.078 Malloc0 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
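Note: with nvmf_tgt running inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... -m 0x78, pid 77681 above), the bdevio target is provisioned entirely through rpc_cmd. The same sequence issued directly against scripts/rpc.py, with the flags, names and listener address exactly as they appear in the trace, would look roughly like:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as passed by the test
    "$RPC" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the bdev as a namespace
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420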
00:11:33.078 [2024-07-15 19:27:22.807890] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:33.078 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:33.078 { 00:11:33.078 "params": { 00:11:33.078 "name": "Nvme$subsystem", 00:11:33.078 "trtype": "$TEST_TRANSPORT", 00:11:33.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:33.078 "adrfam": "ipv4", 00:11:33.078 "trsvcid": "$NVMF_PORT", 00:11:33.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:33.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:33.078 "hdgst": ${hdgst:-false}, 00:11:33.078 "ddgst": ${ddgst:-false} 00:11:33.078 }, 00:11:33.079 "method": "bdev_nvme_attach_controller" 00:11:33.079 } 00:11:33.079 EOF 00:11:33.079 )") 00:11:33.079 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:33.079 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:33.079 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:33.079 19:27:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:33.079 "params": { 00:11:33.079 "name": "Nvme1", 00:11:33.079 "trtype": "tcp", 00:11:33.079 "traddr": "10.0.0.2", 00:11:33.079 "adrfam": "ipv4", 00:11:33.079 "trsvcid": "4420", 00:11:33.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:33.079 "hdgst": false, 00:11:33.079 "ddgst": false 00:11:33.079 }, 00:11:33.079 "method": "bdev_nvme_attach_controller" 00:11:33.079 }' 00:11:33.079 [2024-07-15 19:27:22.860298] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
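Note: bdevio is not pointed at the target over RPC; gen_nvmf_target_json emits a one-off JSON application config (printed in the trace above) that bdevio loads via --json /dev/fd/62, so the Nvme1n1 bdev is attached over TCP before the CUnit suite starts. Against an already-running SPDK application, an equivalent attachment could be made with rpc.py using the values from that generated config (the hostnqn and digest settings from the config are left at their defaults in this sketch):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1                 # creates bdev Nvme1n1 backed by namespace 1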
00:11:33.079 [2024-07-15 19:27:22.860394] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77716 ] 00:11:33.336 [2024-07-15 19:27:22.997182] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.336 [2024-07-15 19:27:23.056221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.336 [2024-07-15 19:27:23.056337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.336 [2024-07-15 19:27:23.056341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.593 I/O targets: 00:11:33.593 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:33.593 00:11:33.593 00:11:33.593 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.593 http://cunit.sourceforge.net/ 00:11:33.593 00:11:33.593 00:11:33.593 Suite: bdevio tests on: Nvme1n1 00:11:33.593 Test: blockdev write read block ...passed 00:11:33.593 Test: blockdev write zeroes read block ...passed 00:11:33.593 Test: blockdev write zeroes read no split ...passed 00:11:33.593 Test: blockdev write zeroes read split ...passed 00:11:33.593 Test: blockdev write zeroes read split partial ...passed 00:11:33.593 Test: blockdev reset ...[2024-07-15 19:27:23.310436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:33.593 [2024-07-15 19:27:23.310546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe25320 (9): Bad file descriptor 00:11:33.593 [2024-07-15 19:27:23.324193] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:33.593 passed 00:11:33.593 Test: blockdev write read 8 blocks ...passed 00:11:33.593 Test: blockdev write read size > 128k ...passed 00:11:33.593 Test: blockdev write read invalid size ...passed 00:11:33.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.593 Test: blockdev write read max offset ...passed 00:11:33.850 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.850 Test: blockdev writev readv 8 blocks ...passed 00:11:33.850 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.850 Test: blockdev writev readv block ...passed 00:11:33.850 Test: blockdev writev readv size > 128k ...passed 00:11:33.850 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.850 Test: blockdev comparev and writev ...[2024-07-15 19:27:23.497866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.497916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.497938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.497949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.498403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.498435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.498454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.498465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.498885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.498913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.498932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.498942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.499377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.499406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.499426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.850 [2024-07-15 19:27:23.499436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:33.850 passed 00:11:33.850 Test: blockdev nvme passthru rw ...passed 00:11:33.850 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:27:23.583746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.850 [2024-07-15 19:27:23.583784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.583945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.850 [2024-07-15 19:27:23.583979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.584124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.850 [2024-07-15 19:27:23.584154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:33.850 [2024-07-15 19:27:23.584302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.850 [2024-07-15 19:27:23.584328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:33.850 passed 00:11:33.850 Test: blockdev nvme admin passthru ...passed 00:11:33.850 Test: blockdev copy ...passed 00:11:33.850 00:11:33.850 Run Summary: Type Total Ran Passed Failed Inactive 00:11:33.850 suites 1 1 n/a 0 0 00:11:33.850 tests 23 23 23 0 0 00:11:33.850 asserts 152 152 152 0 n/a 00:11:33.850 00:11:33.850 Elapsed time = 0.895 seconds 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.108 rmmod nvme_tcp 00:11:34.108 rmmod nvme_fabrics 00:11:34.108 rmmod nvme_keyring 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77681 ']' 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77681 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77681 ']' 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77681 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77681 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:34.108 killing process with pid 77681 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77681' 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77681 00:11:34.108 19:27:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77681 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:34.366 00:11:34.366 real 0m2.225s 00:11:34.366 user 0m7.475s 00:11:34.366 sys 0m0.613s 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.366 19:27:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:34.366 ************************************ 00:11:34.366 END TEST nvmf_bdevio 00:11:34.366 ************************************ 00:11:34.624 19:27:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:34.624 19:27:24 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:34.624 19:27:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:34.624 19:27:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.624 19:27:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.624 ************************************ 00:11:34.624 START TEST nvmf_auth_target 00:11:34.624 ************************************ 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:34.624 * Looking for test storage... 
00:11:34.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:34.624 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:34.625 Cannot find device "nvmf_tgt_br" 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:34.625 Cannot find device "nvmf_tgt_br2" 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:34.625 Cannot find device "nvmf_tgt_br" 00:11:34.625 
19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:34.625 Cannot find device "nvmf_tgt_br2" 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.625 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.883 19:27:24 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:34.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:34.883 00:11:34.883 --- 10.0.0.2 ping statistics --- 00:11:34.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.883 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:34.883 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.883 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:34.883 00:11:34.883 --- 10.0.0.3 ping statistics --- 00:11:34.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.883 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:34.883 00:11:34.883 --- 10.0.0.1 ping statistics --- 00:11:34.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.883 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77900 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77900 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77900 ']' 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.883 19:27:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.883 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77931 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=35235ab696bd88b19de6bc69bc912e53f962c5e13888300e 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.g6k 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 35235ab696bd88b19de6bc69bc912e53f962c5e13888300e 0 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 35235ab696bd88b19de6bc69bc912e53f962c5e13888300e 0 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=35235ab696bd88b19de6bc69bc912e53f962c5e13888300e 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.g6k 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.g6k 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.g6k 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6702c480153dada74f0e81ef7a82c1cc09ba267864def506713430199ef086f3 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9qB 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6702c480153dada74f0e81ef7a82c1cc09ba267864def506713430199ef086f3 3 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6702c480153dada74f0e81ef7a82c1cc09ba267864def506713430199ef086f3 3 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6702c480153dada74f0e81ef7a82c1cc09ba267864def506713430199ef086f3 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9qB 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9qB 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.9qB 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48bd0a3afc16bf3117a9484e44648446 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fZg 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48bd0a3afc16bf3117a9484e44648446 1 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48bd0a3afc16bf3117a9484e44648446 1 
00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48bd0a3afc16bf3117a9484e44648446 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fZg 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fZg 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.fZg 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:35.451 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1850e50cbe75b62d09818f578e2acc4d2ac9c5a2e2b5ca93 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uBv 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1850e50cbe75b62d09818f578e2acc4d2ac9c5a2e2b5ca93 2 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1850e50cbe75b62d09818f578e2acc4d2ac9c5a2e2b5ca93 2 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1850e50cbe75b62d09818f578e2acc4d2ac9c5a2e2b5ca93 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:35.452 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uBv 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uBv 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.uBv 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:35.711 
19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7436a1dc851e91e2bdfb825942d36c1160526c06f664d69d 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.neV 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7436a1dc851e91e2bdfb825942d36c1160526c06f664d69d 2 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7436a1dc851e91e2bdfb825942d36c1160526c06f664d69d 2 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7436a1dc851e91e2bdfb825942d36c1160526c06f664d69d 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.neV 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.neV 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.neV 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=87ce38fa639a09b720b3ecda4f61c001 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hxB 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87ce38fa639a09b720b3ecda4f61c001 1 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87ce38fa639a09b720b3ecda4f61c001 1 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87ce38fa639a09b720b3ecda4f61c001 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hxB 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hxB 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.hxB 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=295bb3909c550c21173ddb8632f8ff3b0d3e1f4672662a89f5ce8dadd47bb434 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xni 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 295bb3909c550c21173ddb8632f8ff3b0d3e1f4672662a89f5ce8dadd47bb434 3 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 295bb3909c550c21173ddb8632f8ff3b0d3e1f4672662a89f5ce8dadd47bb434 3 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=295bb3909c550c21173ddb8632f8ff3b0d3e1f4672662a89f5ce8dadd47bb434 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xni 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xni 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Xni 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77900 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77900 ']' 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
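
Two spdk_tgt instances are in play at this point: the NVMe-oF target reached through /var/tmp/spdk.sock (pid 77900) and the host-side application reached through /var/tmp/host.sock (pid 77931, started above with -L nvme_auth). The keyring_file_add_key calls that follow register each generated key file with both of them, so the keys can later be referenced by name (key0/ckey0, key1/ckey1, ...). For key0 the two registrations look like this (the host-side command appears verbatim in the trace; rpc_cmd on the target side issues the same RPC against /var/tmp/spdk.sock):

# host-side application, RPC socket /var/tmp/host.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.g6k
# NVMe-oF target, default RPC socket /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.g6k
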
00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.711 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77931 /var/tmp/host.sock 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77931 ']' 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.314 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.g6k 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.g6k 00:11:36.314 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.g6k 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.9qB ]] 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9qB 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9qB 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.9qB 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fZg 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fZg 00:11:36.879 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fZg 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.uBv ]] 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uBv 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uBv 00:11:37.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uBv 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.neV 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.neV 00:11:37.396 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.neV 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.hxB ]] 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hxB 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hxB 00:11:37.654 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hxB 00:11:38.221 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:38.221 
19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Xni 00:11:38.221 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.221 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.221 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.221 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Xni 00:11:38.221 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Xni 00:11:38.486 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:38.486 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:38.486 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.486 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.486 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.486 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.744 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:38.744 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.744 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:38.744 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.745 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.003 00:11:39.003 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.003 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.003 19:27:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.262 { 00:11:39.262 "auth": { 00:11:39.262 "dhgroup": "null", 00:11:39.262 "digest": "sha256", 00:11:39.262 "state": "completed" 00:11:39.262 }, 00:11:39.262 "cntlid": 1, 00:11:39.262 "listen_address": { 00:11:39.262 "adrfam": "IPv4", 00:11:39.262 "traddr": "10.0.0.2", 00:11:39.262 "trsvcid": "4420", 00:11:39.262 "trtype": "TCP" 00:11:39.262 }, 00:11:39.262 "peer_address": { 00:11:39.262 "adrfam": "IPv4", 00:11:39.262 "traddr": "10.0.0.1", 00:11:39.262 "trsvcid": "43862", 00:11:39.262 "trtype": "TCP" 00:11:39.262 }, 00:11:39.262 "qid": 0, 00:11:39.262 "state": "enabled", 00:11:39.262 "thread": "nvmf_tgt_poll_group_000" 00:11:39.262 } 00:11:39.262 ]' 00:11:39.262 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.262 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.262 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.262 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.262 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.520 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.520 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.520 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.779 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.071 19:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.071 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.071 { 00:11:45.071 "auth": { 00:11:45.071 "dhgroup": "null", 00:11:45.071 "digest": "sha256", 00:11:45.071 "state": "completed" 00:11:45.071 }, 00:11:45.071 "cntlid": 3, 00:11:45.071 "listen_address": { 00:11:45.071 "adrfam": "IPv4", 00:11:45.071 "traddr": "10.0.0.2", 00:11:45.071 "trsvcid": "4420", 00:11:45.071 "trtype": "TCP" 00:11:45.071 }, 00:11:45.071 "peer_address": { 
00:11:45.071 "adrfam": "IPv4", 00:11:45.071 "traddr": "10.0.0.1", 00:11:45.071 "trsvcid": "43878", 00:11:45.071 "trtype": "TCP" 00:11:45.071 }, 00:11:45.071 "qid": 0, 00:11:45.071 "state": "enabled", 00:11:45.071 "thread": "nvmf_tgt_poll_group_000" 00:11:45.071 } 00:11:45.071 ]' 00:11:45.071 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.329 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.330 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.330 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.330 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.330 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.330 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.330 19:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.588 19:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:46.522 19:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.522 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.780 00:11:46.780 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.780 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.780 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.346 { 00:11:47.346 "auth": { 00:11:47.346 "dhgroup": "null", 00:11:47.346 "digest": "sha256", 00:11:47.346 "state": "completed" 00:11:47.346 }, 00:11:47.346 "cntlid": 5, 00:11:47.346 "listen_address": { 00:11:47.346 "adrfam": "IPv4", 00:11:47.346 "traddr": "10.0.0.2", 00:11:47.346 "trsvcid": "4420", 00:11:47.346 "trtype": "TCP" 00:11:47.346 }, 00:11:47.346 "peer_address": { 00:11:47.346 "adrfam": "IPv4", 00:11:47.346 "traddr": "10.0.0.1", 00:11:47.346 "trsvcid": "33858", 00:11:47.346 "trtype": "TCP" 00:11:47.346 }, 00:11:47.346 "qid": 0, 00:11:47.346 "state": "enabled", 00:11:47.346 "thread": "nvmf_tgt_poll_group_000" 00:11:47.346 } 00:11:47.346 ]' 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:47.346 19:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.346 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.346 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.346 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.604 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:48.171 19:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.430 19:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.689 19:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.689 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.689 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.947 00:11:48.947 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.947 19:27:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.947 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.206 { 00:11:49.206 "auth": { 00:11:49.206 "dhgroup": "null", 00:11:49.206 "digest": "sha256", 00:11:49.206 "state": "completed" 00:11:49.206 }, 00:11:49.206 "cntlid": 7, 00:11:49.206 "listen_address": { 00:11:49.206 "adrfam": "IPv4", 00:11:49.206 "traddr": "10.0.0.2", 00:11:49.206 "trsvcid": "4420", 00:11:49.206 "trtype": "TCP" 00:11:49.206 }, 00:11:49.206 "peer_address": { 00:11:49.206 "adrfam": "IPv4", 00:11:49.206 "traddr": "10.0.0.1", 00:11:49.206 "trsvcid": "33870", 00:11:49.206 "trtype": "TCP" 00:11:49.206 }, 00:11:49.206 "qid": 0, 00:11:49.206 "state": "enabled", 00:11:49.206 "thread": "nvmf_tgt_poll_group_000" 00:11:49.206 } 00:11:49.206 ]' 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:49.206 19:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.465 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.465 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.465 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.724 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:11:50.289 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.289 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.290 19:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.548 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.806 00:11:50.806 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.806 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.806 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.064 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.064 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.064 19:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.064 19:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.322 { 00:11:51.322 "auth": { 00:11:51.322 "dhgroup": "ffdhe2048", 00:11:51.322 "digest": "sha256", 00:11:51.322 "state": "completed" 00:11:51.322 }, 00:11:51.322 "cntlid": 9, 00:11:51.322 "listen_address": { 00:11:51.322 "adrfam": "IPv4", 
00:11:51.322 "traddr": "10.0.0.2", 00:11:51.322 "trsvcid": "4420", 00:11:51.322 "trtype": "TCP" 00:11:51.322 }, 00:11:51.322 "peer_address": { 00:11:51.322 "adrfam": "IPv4", 00:11:51.322 "traddr": "10.0.0.1", 00:11:51.322 "trsvcid": "33890", 00:11:51.322 "trtype": "TCP" 00:11:51.322 }, 00:11:51.322 "qid": 0, 00:11:51.322 "state": "enabled", 00:11:51.322 "thread": "nvmf_tgt_poll_group_000" 00:11:51.322 } 00:11:51.322 ]' 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.322 19:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.322 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.323 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.323 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.580 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:52.209 19:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.467 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.725 00:11:52.984 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.984 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.984 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.242 { 00:11:53.242 "auth": { 00:11:53.242 "dhgroup": "ffdhe2048", 00:11:53.242 "digest": "sha256", 00:11:53.242 "state": "completed" 00:11:53.242 }, 00:11:53.242 "cntlid": 11, 00:11:53.242 "listen_address": { 00:11:53.242 "adrfam": "IPv4", 00:11:53.242 "traddr": "10.0.0.2", 00:11:53.242 "trsvcid": "4420", 00:11:53.242 "trtype": "TCP" 00:11:53.242 }, 00:11:53.242 "peer_address": { 00:11:53.242 "adrfam": "IPv4", 00:11:53.242 "traddr": "10.0.0.1", 00:11:53.242 "trsvcid": "33930", 00:11:53.242 "trtype": "TCP" 00:11:53.242 }, 00:11:53.242 "qid": 0, 00:11:53.242 "state": "enabled", 00:11:53.242 "thread": "nvmf_tgt_poll_group_000" 00:11:53.242 } 00:11:53.242 ]' 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.242 19:27:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.242 19:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.501 19:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.433 19:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:54.434 19:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.434 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.007 00:11:55.007 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.007 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.007 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.007 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.007 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.007 19:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.008 19:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.008 19:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.008 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.008 { 00:11:55.008 "auth": { 00:11:55.008 "dhgroup": "ffdhe2048", 00:11:55.008 "digest": "sha256", 00:11:55.008 "state": "completed" 00:11:55.008 }, 00:11:55.008 "cntlid": 13, 00:11:55.008 "listen_address": { 00:11:55.008 "adrfam": "IPv4", 00:11:55.008 "traddr": "10.0.0.2", 00:11:55.008 "trsvcid": "4420", 00:11:55.008 "trtype": "TCP" 00:11:55.008 }, 00:11:55.008 "peer_address": { 00:11:55.008 "adrfam": "IPv4", 00:11:55.008 "traddr": "10.0.0.1", 00:11:55.008 "trsvcid": "33948", 00:11:55.008 "trtype": "TCP" 00:11:55.008 }, 00:11:55.008 "qid": 0, 00:11:55.008 "state": "enabled", 00:11:55.008 "thread": "nvmf_tgt_poll_group_000" 00:11:55.008 } 00:11:55.008 ]' 00:11:55.008 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.267 19:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.526 19:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 
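
Every digest/dhgroup/key iteration below repeats the pattern already traced for keys 0-2: restrict the host application to a single digest and DH group, require DH-HMAC-CHAP for the test host NQN on the target, attach a controller from the host application, check that the qpair reports auth state "completed", then run the same handshake from the kernel initiator with nvme connect using the literal DHHC-1 secrets before disconnecting and removing the host again. Condensed from the trace for the upcoming sha256/ffdhe2048/key3 case (key3 has no controller key, so no --dhchap-ctrlr-key is passed):

# host app: allow only sha256 digests and the ffdhe2048 DH group
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# target (default socket /var/tmp/spdk.sock): require DH-HMAC-CHAP with key3 for this host
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3
# host app: attach a controller, authenticating with key3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
# verify the qpair completed authentication
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.state'    # expected: completed
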
00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:56.462 19:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.462 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.721 00:11:56.979 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.979 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.979 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.237 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.237 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.237 19:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.237 19:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 19:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.237 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.237 { 00:11:57.237 "auth": { 00:11:57.237 "dhgroup": 
"ffdhe2048", 00:11:57.237 "digest": "sha256", 00:11:57.237 "state": "completed" 00:11:57.237 }, 00:11:57.237 "cntlid": 15, 00:11:57.237 "listen_address": { 00:11:57.237 "adrfam": "IPv4", 00:11:57.238 "traddr": "10.0.0.2", 00:11:57.238 "trsvcid": "4420", 00:11:57.238 "trtype": "TCP" 00:11:57.238 }, 00:11:57.238 "peer_address": { 00:11:57.238 "adrfam": "IPv4", 00:11:57.238 "traddr": "10.0.0.1", 00:11:57.238 "trsvcid": "60332", 00:11:57.238 "trtype": "TCP" 00:11:57.238 }, 00:11:57.238 "qid": 0, 00:11:57.238 "state": "enabled", 00:11:57.238 "thread": "nvmf_tgt_poll_group_000" 00:11:57.238 } 00:11:57.238 ]' 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.238 19:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.496 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.431 19:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.690 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:58.690 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.690 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:58.690 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:11:58.690 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.690 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.691 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.691 19:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.691 19:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.691 19:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.691 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.691 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.950 00:11:58.950 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.950 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.950 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.209 { 00:11:59.209 "auth": { 00:11:59.209 "dhgroup": "ffdhe3072", 00:11:59.209 "digest": "sha256", 00:11:59.209 "state": "completed" 00:11:59.209 }, 00:11:59.209 "cntlid": 17, 00:11:59.209 "listen_address": { 00:11:59.209 "adrfam": "IPv4", 00:11:59.209 "traddr": "10.0.0.2", 00:11:59.209 "trsvcid": "4420", 00:11:59.209 "trtype": "TCP" 00:11:59.209 }, 00:11:59.209 "peer_address": { 00:11:59.209 "adrfam": "IPv4", 00:11:59.209 "traddr": "10.0.0.1", 00:11:59.209 "trsvcid": "60344", 00:11:59.209 "trtype": "TCP" 00:11:59.209 }, 00:11:59.209 "qid": 0, 00:11:59.209 "state": "enabled", 00:11:59.209 "thread": "nvmf_tgt_poll_group_000" 00:11:59.209 } 00:11:59.209 ]' 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.209 19:27:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.468 19:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.468 19:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.468 19:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.727 19:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:00.294 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.552 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.552 
19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.120 00:12:01.120 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.120 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.120 19:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.379 { 00:12:01.379 "auth": { 00:12:01.379 "dhgroup": "ffdhe3072", 00:12:01.379 "digest": "sha256", 00:12:01.379 "state": "completed" 00:12:01.379 }, 00:12:01.379 "cntlid": 19, 00:12:01.379 "listen_address": { 00:12:01.379 "adrfam": "IPv4", 00:12:01.379 "traddr": "10.0.0.2", 00:12:01.379 "trsvcid": "4420", 00:12:01.379 "trtype": "TCP" 00:12:01.379 }, 00:12:01.379 "peer_address": { 00:12:01.379 "adrfam": "IPv4", 00:12:01.379 "traddr": "10.0.0.1", 00:12:01.379 "trsvcid": "60374", 00:12:01.379 "trtype": "TCP" 00:12:01.379 }, 00:12:01.379 "qid": 0, 00:12:01.379 "state": "enabled", 00:12:01.379 "thread": "nvmf_tgt_poll_group_000" 00:12:01.379 } 00:12:01.379 ]' 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.379 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.637 19:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
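After the SPDK-side controller is detached, the same pass is re-validated from the Linux kernel initiator: nvme-cli connects in-band with the raw DHHC-1 secrets (rather than keyring names), the target answers with its controller secret, and the session is torn down before the next combination. A hedged sketch of that step, using the key1/ckey1 secrets printed in the trace above and assuming an nvme-cli build that supports --dhchap-secret/--dhchap-ctrl-secret as used throughout this run:

  # Kernel-initiator re-check of the key1/ckey1 pair from the pass above.
  # The DHHC-1 strings are copied from the trace; other values are as before.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=679b2b86-338b-4205-a8fd-6b6102ab1055
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg:' \
      --dhchap-ctrl-secret 'DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==:'

  # Disconnect again so the subsystem is free for the next digest/dhgroup pass.
  nvme disconnect -n "$subnqn"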
00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:02.572 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.831 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.089 00:12:03.347 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.347 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.347 19:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.616 { 00:12:03.616 "auth": { 00:12:03.616 "dhgroup": "ffdhe3072", 00:12:03.616 "digest": "sha256", 00:12:03.616 "state": "completed" 00:12:03.616 }, 00:12:03.616 "cntlid": 21, 00:12:03.616 "listen_address": { 00:12:03.616 "adrfam": "IPv4", 00:12:03.616 "traddr": "10.0.0.2", 00:12:03.616 "trsvcid": "4420", 00:12:03.616 "trtype": "TCP" 00:12:03.616 }, 00:12:03.616 "peer_address": { 00:12:03.616 "adrfam": "IPv4", 00:12:03.616 "traddr": "10.0.0.1", 00:12:03.616 "trsvcid": "60400", 00:12:03.616 "trtype": "TCP" 00:12:03.616 }, 00:12:03.616 "qid": 0, 00:12:03.616 "state": "enabled", 00:12:03.616 "thread": "nvmf_tgt_poll_group_000" 00:12:03.616 } 00:12:03.616 ]' 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.616 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.620 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.620 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.185 19:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:04.752 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:05.011 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:05.011 19:27:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.011 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:05.011 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:05.011 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:05.011 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.012 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:05.012 19:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.012 19:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.012 19:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.012 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.012 19:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.579 00:12:05.579 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.579 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.579 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.838 { 00:12:05.838 "auth": { 00:12:05.838 "dhgroup": "ffdhe3072", 00:12:05.838 "digest": "sha256", 00:12:05.838 "state": "completed" 00:12:05.838 }, 00:12:05.838 "cntlid": 23, 00:12:05.838 "listen_address": { 00:12:05.838 "adrfam": "IPv4", 00:12:05.838 "traddr": "10.0.0.2", 00:12:05.838 "trsvcid": "4420", 00:12:05.838 "trtype": "TCP" 00:12:05.838 }, 00:12:05.838 "peer_address": { 00:12:05.838 "adrfam": "IPv4", 00:12:05.838 "traddr": "10.0.0.1", 00:12:05.838 "trsvcid": "60438", 00:12:05.838 "trtype": "TCP" 00:12:05.838 }, 00:12:05.838 "qid": 0, 00:12:05.838 "state": "enabled", 00:12:05.838 "thread": "nvmf_tgt_poll_group_000" 00:12:05.838 } 00:12:05.838 ]' 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.838 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.097 19:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.034 19:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.293 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.859 00:12:07.859 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.859 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.859 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.118 { 00:12:08.118 "auth": { 00:12:08.118 "dhgroup": "ffdhe4096", 00:12:08.118 "digest": "sha256", 00:12:08.118 "state": "completed" 00:12:08.118 }, 00:12:08.118 "cntlid": 25, 00:12:08.118 "listen_address": { 00:12:08.118 "adrfam": "IPv4", 00:12:08.118 "traddr": "10.0.0.2", 00:12:08.118 "trsvcid": "4420", 00:12:08.118 "trtype": "TCP" 00:12:08.118 }, 00:12:08.118 "peer_address": { 00:12:08.118 "adrfam": "IPv4", 00:12:08.118 "traddr": "10.0.0.1", 00:12:08.118 "trsvcid": "48844", 00:12:08.118 "trtype": "TCP" 00:12:08.118 }, 00:12:08.118 "qid": 0, 00:12:08.118 "state": "enabled", 00:12:08.118 "thread": "nvmf_tgt_poll_group_000" 00:12:08.118 } 00:12:08.118 ]' 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.118 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.376 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.376 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.376 19:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.634 19:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret 
DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.570 19:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.829 19:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.829 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.829 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.087 00:12:10.087 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.087 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.087 19:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
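The controller-name check just above and the qpair query that follows it are the standard post-attach verification, repeated here for the sha256/ffdhe4096 pass: confirm the host sees the controller it attached, then ask the target for the subsystem's qpairs and assert that authentication completed with the expected digest and dhgroup. A condensed sketch of those checks, under the same socket and NQN assumptions as the earlier sketch:

  # Post-attach verification, as run after every key/dhgroup combination.
  # Expected values below are for the current sha256 / ffdhe4096 pass.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0

  # The host-side app must report the controller we just attached.
  [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The target must show an enabled qpair whose authentication completed with
  # the digest and dhgroup under test.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach so the next combination starts from a clean host.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0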
00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.345 { 00:12:10.345 "auth": { 00:12:10.345 "dhgroup": "ffdhe4096", 00:12:10.345 "digest": "sha256", 00:12:10.345 "state": "completed" 00:12:10.345 }, 00:12:10.345 "cntlid": 27, 00:12:10.345 "listen_address": { 00:12:10.345 "adrfam": "IPv4", 00:12:10.345 "traddr": "10.0.0.2", 00:12:10.345 "trsvcid": "4420", 00:12:10.345 "trtype": "TCP" 00:12:10.345 }, 00:12:10.345 "peer_address": { 00:12:10.345 "adrfam": "IPv4", 00:12:10.345 "traddr": "10.0.0.1", 00:12:10.345 "trsvcid": "48882", 00:12:10.345 "trtype": "TCP" 00:12:10.345 }, 00:12:10.345 "qid": 0, 00:12:10.345 "state": "enabled", 00:12:10.345 "thread": "nvmf_tgt_poll_group_000" 00:12:10.345 } 00:12:10.345 ]' 00:12:10.345 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.603 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.861 19:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.796 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.053 19:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.310 00:12:12.310 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.310 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.310 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.569 { 00:12:12.569 "auth": { 00:12:12.569 "dhgroup": "ffdhe4096", 00:12:12.569 "digest": "sha256", 00:12:12.569 "state": "completed" 00:12:12.569 }, 00:12:12.569 "cntlid": 29, 00:12:12.569 "listen_address": { 00:12:12.569 "adrfam": "IPv4", 00:12:12.569 "traddr": "10.0.0.2", 00:12:12.569 "trsvcid": "4420", 00:12:12.569 "trtype": "TCP" 00:12:12.569 }, 00:12:12.569 "peer_address": { 00:12:12.569 "adrfam": "IPv4", 00:12:12.569 "traddr": "10.0.0.1", 00:12:12.569 "trsvcid": "48910", 00:12:12.569 "trtype": "TCP" 00:12:12.569 }, 00:12:12.569 "qid": 0, 00:12:12.569 "state": "enabled", 00:12:12.569 "thread": 
"nvmf_tgt_poll_group_000" 00:12:12.569 } 00:12:12.569 ]' 00:12:12.569 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.838 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.095 19:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:13.659 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.917 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.175 19:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.433 00:12:14.433 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.433 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.433 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.999 { 00:12:14.999 "auth": { 00:12:14.999 "dhgroup": "ffdhe4096", 00:12:14.999 "digest": "sha256", 00:12:14.999 "state": "completed" 00:12:14.999 }, 00:12:14.999 "cntlid": 31, 00:12:14.999 "listen_address": { 00:12:14.999 "adrfam": "IPv4", 00:12:14.999 "traddr": "10.0.0.2", 00:12:14.999 "trsvcid": "4420", 00:12:14.999 "trtype": "TCP" 00:12:14.999 }, 00:12:14.999 "peer_address": { 00:12:14.999 "adrfam": "IPv4", 00:12:14.999 "traddr": "10.0.0.1", 00:12:14.999 "trsvcid": "48928", 00:12:14.999 "trtype": "TCP" 00:12:14.999 }, 00:12:14.999 "qid": 0, 00:12:14.999 "state": "enabled", 00:12:14.999 "thread": "nvmf_tgt_poll_group_000" 00:12:14.999 } 00:12:14.999 ]' 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.999 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.256 19:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 
679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.190 19:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.757 00:12:16.757 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.757 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.757 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.015 { 00:12:17.015 "auth": { 00:12:17.015 "dhgroup": "ffdhe6144", 00:12:17.015 "digest": "sha256", 00:12:17.015 "state": "completed" 00:12:17.015 }, 00:12:17.015 "cntlid": 33, 00:12:17.015 "listen_address": { 00:12:17.015 "adrfam": "IPv4", 00:12:17.015 "traddr": "10.0.0.2", 00:12:17.015 "trsvcid": "4420", 00:12:17.015 "trtype": "TCP" 00:12:17.015 }, 00:12:17.015 "peer_address": { 00:12:17.015 "adrfam": "IPv4", 00:12:17.015 "traddr": "10.0.0.1", 00:12:17.015 "trsvcid": "48950", 00:12:17.015 "trtype": "TCP" 00:12:17.015 }, 00:12:17.015 "qid": 0, 00:12:17.015 "state": "enabled", 00:12:17.015 "thread": "nvmf_tgt_poll_group_000" 00:12:17.015 } 00:12:17.015 ]' 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:17.015 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.274 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.274 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.274 19:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.532 19:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:18.098 19:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:18.356 19:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:18.356 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.615 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.880 00:12:18.880 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.880 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.880 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.138 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.138 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.138 19:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.138 19:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.396 19:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.396 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.396 { 00:12:19.396 "auth": { 00:12:19.396 "dhgroup": "ffdhe6144", 00:12:19.396 "digest": "sha256", 00:12:19.396 "state": "completed" 00:12:19.396 }, 00:12:19.396 "cntlid": 35, 00:12:19.396 "listen_address": { 00:12:19.396 "adrfam": "IPv4", 00:12:19.396 "traddr": "10.0.0.2", 00:12:19.396 "trsvcid": "4420", 00:12:19.396 "trtype": "TCP" 00:12:19.396 }, 00:12:19.396 
"peer_address": { 00:12:19.396 "adrfam": "IPv4", 00:12:19.396 "traddr": "10.0.0.1", 00:12:19.396 "trsvcid": "35146", 00:12:19.396 "trtype": "TCP" 00:12:19.396 }, 00:12:19.396 "qid": 0, 00:12:19.396 "state": "enabled", 00:12:19.396 "thread": "nvmf_tgt_poll_group_000" 00:12:19.396 } 00:12:19.396 ]' 00:12:19.396 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.396 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.396 19:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.396 19:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.396 19:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.396 19:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.396 19:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.396 19:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.655 19:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:20.369 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.650 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.215 00:12:21.215 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.215 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.215 19:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.474 { 00:12:21.474 "auth": { 00:12:21.474 "dhgroup": "ffdhe6144", 00:12:21.474 "digest": "sha256", 00:12:21.474 "state": "completed" 00:12:21.474 }, 00:12:21.474 "cntlid": 37, 00:12:21.474 "listen_address": { 00:12:21.474 "adrfam": "IPv4", 00:12:21.474 "traddr": "10.0.0.2", 00:12:21.474 "trsvcid": "4420", 00:12:21.474 "trtype": "TCP" 00:12:21.474 }, 00:12:21.474 "peer_address": { 00:12:21.474 "adrfam": "IPv4", 00:12:21.474 "traddr": "10.0.0.1", 00:12:21.474 "trsvcid": "35188", 00:12:21.474 "trtype": "TCP" 00:12:21.474 }, 00:12:21.474 "qid": 0, 00:12:21.474 "state": "enabled", 00:12:21.474 "thread": "nvmf_tgt_poll_group_000" 00:12:21.474 } 00:12:21.474 ]' 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.474 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.733 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.733 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.733 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.733 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.733 19:28:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.992 19:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:22.559 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.818 19:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.077 19:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.077 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.077 19:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.335 00:12:23.335 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:23.335 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.335 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.593 { 00:12:23.593 "auth": { 00:12:23.593 "dhgroup": "ffdhe6144", 00:12:23.593 "digest": "sha256", 00:12:23.593 "state": "completed" 00:12:23.593 }, 00:12:23.593 "cntlid": 39, 00:12:23.593 "listen_address": { 00:12:23.593 "adrfam": "IPv4", 00:12:23.593 "traddr": "10.0.0.2", 00:12:23.593 "trsvcid": "4420", 00:12:23.593 "trtype": "TCP" 00:12:23.593 }, 00:12:23.593 "peer_address": { 00:12:23.593 "adrfam": "IPv4", 00:12:23.593 "traddr": "10.0.0.1", 00:12:23.593 "trsvcid": "35226", 00:12:23.593 "trtype": "TCP" 00:12:23.593 }, 00:12:23.593 "qid": 0, 00:12:23.593 "state": "enabled", 00:12:23.593 "thread": "nvmf_tgt_poll_group_000" 00:12:23.593 } 00:12:23.593 ]' 00:12:23.593 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.850 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.108 19:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:12:24.674 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
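The records above finish the sha256/ffdhe6144 block: for each key index the host and target are configured, a controller is attached and verified, the kernel initiator repeats the handshake with the raw DHHC-1 secrets, and the host authorization is removed before the next combination (ffdhe8192 starts below). What follows is a minimal sketch of that per-key round, distilled only from the commands visible in this trace. It assumes the same subsystem NQN, host NQN, 10.0.0.2:4420 listener and rpc.py paths; the target-side calls appear as rpc_cmd in the trace and are shown here as plain rpc.py invocations against the target's default RPC socket; key0/ckey0 are key names assumed to have been registered with the target earlier in the script, and $DHCHAP_KEY/$DHCHAP_CTRL_KEY are placeholders for the corresponding DHHC-1 secrets.

# host-side bdev_nvme options: allow only the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# target side: authorize the host NQN with the named keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller, authenticating with the same keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify the negotiated auth parameters on the target, then detach
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# repeat the handshake from the kernel initiator with the raw DHHC-1 secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
    --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# drop the host authorization before the next key/dhgroup combination
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055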
00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.932 19:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.190 19:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.190 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.190 19:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.756 00:12:25.756 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.756 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.756 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.014 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.014 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.014 19:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.014 19:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.015 { 00:12:26.015 "auth": { 00:12:26.015 "dhgroup": "ffdhe8192", 00:12:26.015 "digest": "sha256", 00:12:26.015 "state": "completed" 00:12:26.015 }, 00:12:26.015 "cntlid": 41, 
00:12:26.015 "listen_address": { 00:12:26.015 "adrfam": "IPv4", 00:12:26.015 "traddr": "10.0.0.2", 00:12:26.015 "trsvcid": "4420", 00:12:26.015 "trtype": "TCP" 00:12:26.015 }, 00:12:26.015 "peer_address": { 00:12:26.015 "adrfam": "IPv4", 00:12:26.015 "traddr": "10.0.0.1", 00:12:26.015 "trsvcid": "35252", 00:12:26.015 "trtype": "TCP" 00:12:26.015 }, 00:12:26.015 "qid": 0, 00:12:26.015 "state": "enabled", 00:12:26.015 "thread": "nvmf_tgt_poll_group_000" 00:12:26.015 } 00:12:26.015 ]' 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.015 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.273 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.273 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.273 19:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.531 19:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.098 19:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:27.357 
19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.357 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.925 00:12:27.925 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.925 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.925 19:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.493 { 00:12:28.493 "auth": { 00:12:28.493 "dhgroup": "ffdhe8192", 00:12:28.493 "digest": "sha256", 00:12:28.493 "state": "completed" 00:12:28.493 }, 00:12:28.493 "cntlid": 43, 00:12:28.493 "listen_address": { 00:12:28.493 "adrfam": "IPv4", 00:12:28.493 "traddr": "10.0.0.2", 00:12:28.493 "trsvcid": "4420", 00:12:28.493 "trtype": "TCP" 00:12:28.493 }, 00:12:28.493 "peer_address": { 00:12:28.493 "adrfam": "IPv4", 00:12:28.493 "traddr": "10.0.0.1", 00:12:28.493 "trsvcid": "54026", 00:12:28.493 "trtype": "TCP" 00:12:28.493 }, 00:12:28.493 "qid": 0, 00:12:28.493 "state": "enabled", 00:12:28.493 "thread": "nvmf_tgt_poll_group_000" 00:12:28.493 } 00:12:28.493 ]' 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.493 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.752 19:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.688 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.946 19:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.879 00:12:30.879 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.879 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.879 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.137 { 00:12:31.137 "auth": { 00:12:31.137 "dhgroup": "ffdhe8192", 00:12:31.137 "digest": "sha256", 00:12:31.137 "state": "completed" 00:12:31.137 }, 00:12:31.137 "cntlid": 45, 00:12:31.137 "listen_address": { 00:12:31.137 "adrfam": "IPv4", 00:12:31.137 "traddr": "10.0.0.2", 00:12:31.137 "trsvcid": "4420", 00:12:31.137 "trtype": "TCP" 00:12:31.137 }, 00:12:31.137 "peer_address": { 00:12:31.137 "adrfam": "IPv4", 00:12:31.137 "traddr": "10.0.0.1", 00:12:31.137 "trsvcid": "54058", 00:12:31.137 "trtype": "TCP" 00:12:31.137 }, 00:12:31.137 "qid": 0, 00:12:31.137 "state": "enabled", 00:12:31.137 "thread": "nvmf_tgt_poll_group_000" 00:12:31.137 } 00:12:31.137 ]' 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.137 19:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.396 19:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:32.329 19:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.587 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.153 00:12:33.411 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.411 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.411 19:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:33.669 { 00:12:33.669 "auth": { 00:12:33.669 "dhgroup": "ffdhe8192", 00:12:33.669 "digest": "sha256", 00:12:33.669 "state": "completed" 00:12:33.669 }, 00:12:33.669 "cntlid": 47, 00:12:33.669 "listen_address": { 00:12:33.669 "adrfam": "IPv4", 00:12:33.669 "traddr": "10.0.0.2", 00:12:33.669 "trsvcid": "4420", 00:12:33.669 "trtype": "TCP" 00:12:33.669 }, 00:12:33.669 "peer_address": { 00:12:33.669 "adrfam": "IPv4", 00:12:33.669 "traddr": "10.0.0.1", 00:12:33.669 "trsvcid": "54078", 00:12:33.669 "trtype": "TCP" 00:12:33.669 }, 00:12:33.669 "qid": 0, 00:12:33.669 "state": "enabled", 00:12:33.669 "thread": "nvmf_tgt_poll_group_000" 00:12:33.669 } 00:12:33.669 ]' 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.669 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.670 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.670 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.236 19:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:34.803 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
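Each connect_authenticate round, including the sha384/null pass that begins here, validates the negotiated parameters by dumping the subsystem's qpairs and filtering the JSON with jq. A standalone version of that check, assuming the same rpc.py paths, controller name nvme0, and the sha384/null pair under test (only the jq filters and RPC calls visible in the trace are used):

# the host must still see exactly the controller it attached
[[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# the target-side qpair must report the expected digest, dhgroup, and a completed auth state
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]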
00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.061 19:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.321 00:12:35.579 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.579 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.579 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.839 { 00:12:35.839 "auth": { 00:12:35.839 "dhgroup": "null", 00:12:35.839 "digest": "sha384", 00:12:35.839 "state": "completed" 00:12:35.839 }, 00:12:35.839 "cntlid": 49, 00:12:35.839 "listen_address": { 00:12:35.839 "adrfam": "IPv4", 00:12:35.839 "traddr": "10.0.0.2", 00:12:35.839 "trsvcid": "4420", 00:12:35.839 "trtype": "TCP" 00:12:35.839 }, 00:12:35.839 "peer_address": { 00:12:35.839 "adrfam": "IPv4", 00:12:35.839 "traddr": "10.0.0.1", 00:12:35.839 "trsvcid": "54110", 00:12:35.839 "trtype": "TCP" 00:12:35.839 }, 00:12:35.839 "qid": 0, 00:12:35.839 "state": "enabled", 00:12:35.839 "thread": "nvmf_tgt_poll_group_000" 00:12:35.839 } 00:12:35.839 ]' 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.839 19:28:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.839 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.402 19:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:36.979 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.238 19:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.805 00:12:37.805 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.805 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.805 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.064 { 00:12:38.064 "auth": { 00:12:38.064 "dhgroup": "null", 00:12:38.064 "digest": "sha384", 00:12:38.064 "state": "completed" 00:12:38.064 }, 00:12:38.064 "cntlid": 51, 00:12:38.064 "listen_address": { 00:12:38.064 "adrfam": "IPv4", 00:12:38.064 "traddr": "10.0.0.2", 00:12:38.064 "trsvcid": "4420", 00:12:38.064 "trtype": "TCP" 00:12:38.064 }, 00:12:38.064 "peer_address": { 00:12:38.064 "adrfam": "IPv4", 00:12:38.064 "traddr": "10.0.0.1", 00:12:38.064 "trsvcid": "45416", 00:12:38.064 "trtype": "TCP" 00:12:38.064 }, 00:12:38.064 "qid": 0, 00:12:38.064 "state": "enabled", 00:12:38.064 "thread": "nvmf_tgt_poll_group_000" 00:12:38.064 } 00:12:38.064 ]' 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.064 19:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.322 19:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:39.252 19:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.510 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.075 00:12:40.075 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.075 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.075 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.334 { 00:12:40.334 "auth": { 00:12:40.334 "dhgroup": "null", 00:12:40.334 "digest": "sha384", 00:12:40.334 "state": "completed" 00:12:40.334 }, 00:12:40.334 "cntlid": 53, 00:12:40.334 "listen_address": { 00:12:40.334 "adrfam": "IPv4", 00:12:40.334 "traddr": "10.0.0.2", 00:12:40.334 "trsvcid": "4420", 00:12:40.334 "trtype": "TCP" 00:12:40.334 }, 00:12:40.334 "peer_address": { 00:12:40.334 "adrfam": "IPv4", 00:12:40.334 "traddr": "10.0.0.1", 00:12:40.334 "trsvcid": "45448", 00:12:40.334 "trtype": "TCP" 00:12:40.334 }, 00:12:40.334 "qid": 0, 00:12:40.334 "state": "enabled", 00:12:40.334 "thread": "nvmf_tgt_poll_group_000" 00:12:40.334 } 00:12:40.334 ]' 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:40.334 19:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.334 19:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.334 19:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.334 19:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.591 19:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:41.551 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.807 19:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.808 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.808 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.065 00:12:42.323 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.323 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.323 19:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.581 { 00:12:42.581 "auth": { 00:12:42.581 "dhgroup": "null", 00:12:42.581 "digest": "sha384", 00:12:42.581 "state": "completed" 00:12:42.581 }, 00:12:42.581 "cntlid": 55, 00:12:42.581 "listen_address": { 00:12:42.581 "adrfam": "IPv4", 00:12:42.581 "traddr": "10.0.0.2", 00:12:42.581 "trsvcid": "4420", 00:12:42.581 "trtype": "TCP" 00:12:42.581 }, 00:12:42.581 "peer_address": { 00:12:42.581 "adrfam": "IPv4", 00:12:42.581 "traddr": "10.0.0.1", 00:12:42.581 "trsvcid": "45468", 00:12:42.581 "trtype": "TCP" 00:12:42.581 }, 00:12:42.581 "qid": 0, 00:12:42.581 "state": "enabled", 00:12:42.581 "thread": "nvmf_tgt_poll_group_000" 00:12:42.581 } 00:12:42.581 ]' 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.581 19:28:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.581 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.840 19:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.772 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.337 00:12:44.337 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.337 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.337 19:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.595 { 00:12:44.595 "auth": { 00:12:44.595 "dhgroup": "ffdhe2048", 00:12:44.595 "digest": "sha384", 00:12:44.595 "state": "completed" 00:12:44.595 }, 00:12:44.595 "cntlid": 57, 00:12:44.595 "listen_address": { 00:12:44.595 "adrfam": "IPv4", 00:12:44.595 "traddr": "10.0.0.2", 00:12:44.595 "trsvcid": "4420", 00:12:44.595 "trtype": "TCP" 00:12:44.595 }, 00:12:44.595 "peer_address": { 00:12:44.595 "adrfam": "IPv4", 00:12:44.595 "traddr": "10.0.0.1", 00:12:44.595 "trsvcid": "45494", 00:12:44.595 "trtype": "TCP" 00:12:44.595 }, 00:12:44.595 "qid": 0, 00:12:44.595 "state": "enabled", 00:12:44.595 "thread": "nvmf_tgt_poll_group_000" 00:12:44.595 } 00:12:44.595 ]' 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.595 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.853 19:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret 
DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:45.791 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.050 19:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.659 00:12:46.659 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.659 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.659 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
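The sha384/ffdhe2048/key1 round traced above is the core connect_authenticate cycle that target/auth.sh repeats for every digest, dhgroup, and key index. A minimal sketch of the setup half of that cycle, using the commands exactly as they appear in the trace, follows; SPDK_DIR, the target's default RPC socket, and the earlier registration of key1/ckey1 in the keyring are assumptions, since those steps fall outside this excerpt.

# Setup half of one connect_authenticate round (sha384 / ffdhe2048 / key1), as exercised above.
# Assumptions: SPDK_DIR points at the SPDK checkout, the nvmf target answers on its default
# RPC socket (the trace hides this behind the rpc_cmd wrapper), and key1/ckey1 were already
# registered with the keyring earlier in the script.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict bdev_nvme to the digest/dhgroup under test.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side: allow the host and bind its DH-HMAC-CHAP key and controller key.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, which triggers the authentication handshake.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only succeeds after authentication, so nvme0 must now be listed.
[[ $("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]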
00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.917 { 00:12:46.917 "auth": { 00:12:46.917 "dhgroup": "ffdhe2048", 00:12:46.917 "digest": "sha384", 00:12:46.917 "state": "completed" 00:12:46.917 }, 00:12:46.917 "cntlid": 59, 00:12:46.917 "listen_address": { 00:12:46.917 "adrfam": "IPv4", 00:12:46.917 "traddr": "10.0.0.2", 00:12:46.917 "trsvcid": "4420", 00:12:46.917 "trtype": "TCP" 00:12:46.917 }, 00:12:46.917 "peer_address": { 00:12:46.917 "adrfam": "IPv4", 00:12:46.917 "traddr": "10.0.0.1", 00:12:46.917 "trsvcid": "45522", 00:12:46.917 "trtype": "TCP" 00:12:46.917 }, 00:12:46.917 "qid": 0, 00:12:46.917 "state": "enabled", 00:12:46.917 "thread": "nvmf_tgt_poll_group_000" 00:12:46.917 } 00:12:46.917 ]' 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.917 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.484 19:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.048 19:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.305 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.563 00:12:48.820 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.820 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.820 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.078 { 00:12:49.078 "auth": { 00:12:49.078 "dhgroup": "ffdhe2048", 00:12:49.078 "digest": "sha384", 00:12:49.078 "state": "completed" 00:12:49.078 }, 00:12:49.078 "cntlid": 61, 00:12:49.078 "listen_address": { 00:12:49.078 "adrfam": "IPv4", 00:12:49.078 "traddr": "10.0.0.2", 00:12:49.078 "trsvcid": "4420", 00:12:49.078 "trtype": "TCP" 00:12:49.078 }, 00:12:49.078 "peer_address": { 00:12:49.078 "adrfam": "IPv4", 00:12:49.078 "traddr": "10.0.0.1", 00:12:49.078 "trsvcid": "35918", 00:12:49.078 "trtype": "TCP" 00:12:49.078 }, 00:12:49.078 "qid": 0, 00:12:49.078 "state": "enabled", 00:12:49.078 "thread": 
"nvmf_tgt_poll_group_000" 00:12:49.078 } 00:12:49.078 ]' 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.078 19:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.336 19:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.268 19:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.525 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.782 00:12:50.782 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.782 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.782 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.040 { 00:12:51.040 "auth": { 00:12:51.040 "dhgroup": "ffdhe2048", 00:12:51.040 "digest": "sha384", 00:12:51.040 "state": "completed" 00:12:51.040 }, 00:12:51.040 "cntlid": 63, 00:12:51.040 "listen_address": { 00:12:51.040 "adrfam": "IPv4", 00:12:51.040 "traddr": "10.0.0.2", 00:12:51.040 "trsvcid": "4420", 00:12:51.040 "trtype": "TCP" 00:12:51.040 }, 00:12:51.040 "peer_address": { 00:12:51.040 "adrfam": "IPv4", 00:12:51.040 "traddr": "10.0.0.1", 00:12:51.040 "trsvcid": "35950", 00:12:51.040 "trtype": "TCP" 00:12:51.040 }, 00:12:51.040 "qid": 0, 00:12:51.040 "state": "enabled", 00:12:51.040 "thread": "nvmf_tgt_poll_group_000" 00:12:51.040 } 00:12:51.040 ]' 00:12:51.040 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.298 19:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.555 19:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 
679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:12:52.488 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.489 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.746 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.004 00:12:53.004 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.004 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.004 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.263 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.263 19:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.263 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.263 19:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.263 19:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.263 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.263 { 00:12:53.263 "auth": { 00:12:53.263 "dhgroup": "ffdhe3072", 00:12:53.263 "digest": "sha384", 00:12:53.263 "state": "completed" 00:12:53.263 }, 00:12:53.263 "cntlid": 65, 00:12:53.263 "listen_address": { 00:12:53.263 "adrfam": "IPv4", 00:12:53.263 "traddr": "10.0.0.2", 00:12:53.263 "trsvcid": "4420", 00:12:53.263 "trtype": "TCP" 00:12:53.263 }, 00:12:53.263 "peer_address": { 00:12:53.263 "adrfam": "IPv4", 00:12:53.263 "traddr": "10.0.0.1", 00:12:53.263 "trsvcid": "35986", 00:12:53.263 "trtype": "TCP" 00:12:53.263 }, 00:12:53.263 "qid": 0, 00:12:53.263 "state": "enabled", 00:12:53.263 "thread": "nvmf_tgt_poll_group_000" 00:12:53.263 } 00:12:53.263 ]' 00:12:53.263 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.263 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.263 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.521 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.521 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.521 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.521 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.521 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.779 19:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:12:54.714 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.714 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:54.714 19:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.714 19:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.715 19:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.715 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:54.715 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.715 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.973 19:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.537 00:12:55.537 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.537 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.537 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.794 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.794 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.794 19:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.794 19:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.794 19:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.794 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.794 { 00:12:55.794 "auth": { 00:12:55.794 "dhgroup": "ffdhe3072", 00:12:55.794 "digest": "sha384", 00:12:55.794 "state": "completed" 00:12:55.795 }, 00:12:55.795 "cntlid": 67, 00:12:55.795 "listen_address": { 00:12:55.795 "adrfam": "IPv4", 00:12:55.795 "traddr": "10.0.0.2", 00:12:55.795 "trsvcid": "4420", 00:12:55.795 "trtype": "TCP" 00:12:55.795 }, 00:12:55.795 
"peer_address": { 00:12:55.795 "adrfam": "IPv4", 00:12:55.795 "traddr": "10.0.0.1", 00:12:55.795 "trsvcid": "36008", 00:12:55.795 "trtype": "TCP" 00:12:55.795 }, 00:12:55.795 "qid": 0, 00:12:55.795 "state": "enabled", 00:12:55.795 "thread": "nvmf_tgt_poll_group_000" 00:12:55.795 } 00:12:55.795 ]' 00:12:55.795 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.795 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.795 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.053 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:56.053 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.053 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.053 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.053 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.312 19:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:12:56.923 19:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.923 19:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:56.923 19:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.923 19:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.181 19:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.181 19:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.181 19:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.181 19:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.439 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.697 00:12:57.697 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.697 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.697 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.956 { 00:12:57.956 "auth": { 00:12:57.956 "dhgroup": "ffdhe3072", 00:12:57.956 "digest": "sha384", 00:12:57.956 "state": "completed" 00:12:57.956 }, 00:12:57.956 "cntlid": 69, 00:12:57.956 "listen_address": { 00:12:57.956 "adrfam": "IPv4", 00:12:57.956 "traddr": "10.0.0.2", 00:12:57.956 "trsvcid": "4420", 00:12:57.956 "trtype": "TCP" 00:12:57.956 }, 00:12:57.956 "peer_address": { 00:12:57.956 "adrfam": "IPv4", 00:12:57.956 "traddr": "10.0.0.1", 00:12:57.956 "trsvcid": "36408", 00:12:57.956 "trtype": "TCP" 00:12:57.956 }, 00:12:57.956 "qid": 0, 00:12:57.956 "state": "enabled", 00:12:57.956 "thread": "nvmf_tgt_poll_group_000" 00:12:57.956 } 00:12:57.956 ]' 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.956 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.215 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:58.215 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.215 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.215 19:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.215 19:28:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.473 19:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:59.407 19:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.407 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.974 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.974 { 00:12:59.974 "auth": { 00:12:59.974 "dhgroup": "ffdhe3072", 00:12:59.974 "digest": "sha384", 00:12:59.974 "state": "completed" 00:12:59.974 }, 00:12:59.974 "cntlid": 71, 00:12:59.974 "listen_address": { 00:12:59.974 "adrfam": "IPv4", 00:12:59.974 "traddr": "10.0.0.2", 00:12:59.974 "trsvcid": "4420", 00:12:59.974 "trtype": "TCP" 00:12:59.974 }, 00:12:59.974 "peer_address": { 00:12:59.974 "adrfam": "IPv4", 00:12:59.974 "traddr": "10.0.0.1", 00:12:59.974 "trsvcid": "36436", 00:12:59.974 "trtype": "TCP" 00:12:59.974 }, 00:12:59.974 "qid": 0, 00:12:59.974 "state": "enabled", 00:12:59.974 "thread": "nvmf_tgt_poll_group_000" 00:12:59.974 } 00:12:59.974 ]' 00:12:59.974 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.232 19:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.491 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
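Each round above ends the same way: the negotiated parameters are read back from the target with nvmf_subsystem_get_qpairs and checked with jq, the SPDK-host controller is detached, the handshake is repeated once more with nvme-cli using the printed DHHC-1 secret, and the host is removed before the next dhgroup is configured. A sketch of that verification and cleanup half, for the sha384/ffdhe3072/key3 round just completed, follows; the default target RPC socket is an assumption, and the secret string is copied from the log above.

# Verification and cleanup half of a connect_authenticate round (sha384 / ffdhe3072 / key3).
# Assumptions: the target answers on its default RPC socket; key3 has no controller secret in
# this run, so --dhchap-ctrl-secret is omitted here (the key0/key1/key2 rounds above pass it).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
SUBNQN=nqn.2024-03.io.spdk:cnode0
SECRET='DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=:'

# Confirm the qpair negotiated the expected digest/dhgroup and finished authentication.
qpairs=$("$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Drop the SPDK-host controller, then redo the handshake with nvme-cli and the inline secret.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$SECRET"
nvme disconnect -n "$SUBNQN"

# Revoke the host so the next digest/dhgroup combination starts from a clean subsystem.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"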
00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.425 19:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.425 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.990 00:13:01.990 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.990 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.990 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.250 { 00:13:02.250 "auth": { 00:13:02.250 "dhgroup": "ffdhe4096", 00:13:02.250 "digest": "sha384", 00:13:02.250 "state": "completed" 00:13:02.250 }, 00:13:02.250 "cntlid": 73, 
00:13:02.250 "listen_address": { 00:13:02.250 "adrfam": "IPv4", 00:13:02.250 "traddr": "10.0.0.2", 00:13:02.250 "trsvcid": "4420", 00:13:02.250 "trtype": "TCP" 00:13:02.250 }, 00:13:02.250 "peer_address": { 00:13:02.250 "adrfam": "IPv4", 00:13:02.250 "traddr": "10.0.0.1", 00:13:02.250 "trsvcid": "36470", 00:13:02.250 "trtype": "TCP" 00:13:02.250 }, 00:13:02.250 "qid": 0, 00:13:02.250 "state": "enabled", 00:13:02.250 "thread": "nvmf_tgt_poll_group_000" 00:13:02.250 } 00:13:02.250 ]' 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.250 19:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.250 19:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.250 19:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.250 19:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.508 19:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.441 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:03.698 
19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.698 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.955 00:13:03.955 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.955 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.955 19:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.521 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.521 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.521 19:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.521 19:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.521 19:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.521 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.521 { 00:13:04.521 "auth": { 00:13:04.522 "dhgroup": "ffdhe4096", 00:13:04.522 "digest": "sha384", 00:13:04.522 "state": "completed" 00:13:04.522 }, 00:13:04.522 "cntlid": 75, 00:13:04.522 "listen_address": { 00:13:04.522 "adrfam": "IPv4", 00:13:04.522 "traddr": "10.0.0.2", 00:13:04.522 "trsvcid": "4420", 00:13:04.522 "trtype": "TCP" 00:13:04.522 }, 00:13:04.522 "peer_address": { 00:13:04.522 "adrfam": "IPv4", 00:13:04.522 "traddr": "10.0.0.1", 00:13:04.522 "trsvcid": "36494", 00:13:04.522 "trtype": "TCP" 00:13:04.522 }, 00:13:04.522 "qid": 0, 00:13:04.522 "state": "enabled", 00:13:04.522 "thread": "nvmf_tgt_poll_group_000" 00:13:04.522 } 00:13:04.522 ]' 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.522 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.779 19:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.734 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.297 00:13:06.297 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.297 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.297 19:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.554 { 00:13:06.554 "auth": { 00:13:06.554 "dhgroup": "ffdhe4096", 00:13:06.554 "digest": "sha384", 00:13:06.554 "state": "completed" 00:13:06.554 }, 00:13:06.554 "cntlid": 77, 00:13:06.554 "listen_address": { 00:13:06.554 "adrfam": "IPv4", 00:13:06.554 "traddr": "10.0.0.2", 00:13:06.554 "trsvcid": "4420", 00:13:06.554 "trtype": "TCP" 00:13:06.554 }, 00:13:06.554 "peer_address": { 00:13:06.554 "adrfam": "IPv4", 00:13:06.554 "traddr": "10.0.0.1", 00:13:06.554 "trsvcid": "36512", 00:13:06.554 "trtype": "TCP" 00:13:06.554 }, 00:13:06.554 "qid": 0, 00:13:06.554 "state": "enabled", 00:13:06.554 "thread": "nvmf_tgt_poll_group_000" 00:13:06.554 } 00:13:06.554 ]' 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.554 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.849 19:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:07.431 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.996 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:08.254 00:13:08.254 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.254 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.254 19:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.512 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.512 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.512 19:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.512 19:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 19:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.512 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:08.512 { 00:13:08.512 "auth": { 00:13:08.512 "dhgroup": "ffdhe4096", 00:13:08.512 "digest": "sha384", 00:13:08.512 "state": "completed" 00:13:08.512 }, 00:13:08.512 "cntlid": 79, 00:13:08.512 "listen_address": { 00:13:08.512 "adrfam": "IPv4", 00:13:08.512 "traddr": "10.0.0.2", 00:13:08.512 "trsvcid": "4420", 00:13:08.512 "trtype": "TCP" 00:13:08.513 }, 00:13:08.513 "peer_address": { 00:13:08.513 "adrfam": "IPv4", 00:13:08.513 "traddr": "10.0.0.1", 00:13:08.513 "trsvcid": "59794", 00:13:08.513 "trtype": "TCP" 00:13:08.513 }, 00:13:08.513 "qid": 0, 00:13:08.513 "state": "enabled", 00:13:08.513 "thread": "nvmf_tgt_poll_group_000" 00:13:08.513 } 00:13:08.513 ]' 00:13:08.513 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.513 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.513 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.770 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:08.770 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.770 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.770 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.770 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.027 19:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.592 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.879 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:09.879 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.879 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
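The iterations in this trace all follow the same shape: for every digest (auth.sh@91), dhgroup (auth.sh@92) and keyid (auth.sh@93), the host's DH-HMAC-CHAP options are narrowed to the combination under test, the host is registered on the target with the matching key pair, and a controller is attached so the handshake actually runs. Condensed from the commands printed verbatim in this log (NQNs and key names as shown above; $digest/$dhgroup/$keyid stand for the loop variables, and $rpc is shorthand for the rpc.py path the trace uses), one iteration looks roughly like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: only negotiate the digest/dhgroup under test (auth.sh@94)
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # target side: allow the host with this iteration's key pair (auth.sh@39)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # host side: attach a controller, which drives the DH-HMAC-CHAP handshake (auth.sh@40)
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

For keyid 3 the trace passes only --dhchap-key key3 (no controller key), matching the ${ckeys[$3]:+...} expansion logged at auth.sh@37.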
00:13:09.879 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.879 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.879 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.880 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.880 19:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.880 19:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.880 19:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.880 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.880 19:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.446 00:13:10.446 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.446 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.446 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.704 { 00:13:10.704 "auth": { 00:13:10.704 "dhgroup": "ffdhe6144", 00:13:10.704 "digest": "sha384", 00:13:10.704 "state": "completed" 00:13:10.704 }, 00:13:10.704 "cntlid": 81, 00:13:10.704 "listen_address": { 00:13:10.704 "adrfam": "IPv4", 00:13:10.704 "traddr": "10.0.0.2", 00:13:10.704 "trsvcid": "4420", 00:13:10.704 "trtype": "TCP" 00:13:10.704 }, 00:13:10.704 "peer_address": { 00:13:10.704 "adrfam": "IPv4", 00:13:10.704 "traddr": "10.0.0.1", 00:13:10.704 "trsvcid": "59812", 00:13:10.704 "trtype": "TCP" 00:13:10.704 }, 00:13:10.704 "qid": 0, 00:13:10.704 "state": "enabled", 00:13:10.704 "thread": "nvmf_tgt_poll_group_000" 00:13:10.704 } 00:13:10.704 ]' 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:13:10.704 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.962 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.962 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.962 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.220 19:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:11.786 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.045 19:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.612 00:13:12.612 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.612 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.612 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.870 { 00:13:12.870 "auth": { 00:13:12.870 "dhgroup": "ffdhe6144", 00:13:12.870 "digest": "sha384", 00:13:12.870 "state": "completed" 00:13:12.870 }, 00:13:12.870 "cntlid": 83, 00:13:12.870 "listen_address": { 00:13:12.870 "adrfam": "IPv4", 00:13:12.870 "traddr": "10.0.0.2", 00:13:12.870 "trsvcid": "4420", 00:13:12.870 "trtype": "TCP" 00:13:12.870 }, 00:13:12.870 "peer_address": { 00:13:12.870 "adrfam": "IPv4", 00:13:12.870 "traddr": "10.0.0.1", 00:13:12.870 "trsvcid": "59846", 00:13:12.870 "trtype": "TCP" 00:13:12.870 }, 00:13:12.870 "qid": 0, 00:13:12.870 "state": "enabled", 00:13:12.870 "thread": "nvmf_tgt_poll_group_000" 00:13:12.870 } 00:13:12.870 ]' 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.870 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.128 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.128 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.128 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.128 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.128 19:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.387 19:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:13.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:13.951 19:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.514 19:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.515 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.515 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.773 00:13:14.773 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.773 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.773 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.031 { 00:13:15.031 "auth": { 00:13:15.031 "dhgroup": "ffdhe6144", 00:13:15.031 "digest": "sha384", 00:13:15.031 "state": "completed" 00:13:15.031 }, 00:13:15.031 "cntlid": 85, 00:13:15.031 "listen_address": { 00:13:15.031 "adrfam": "IPv4", 00:13:15.031 "traddr": "10.0.0.2", 00:13:15.031 "trsvcid": "4420", 00:13:15.031 "trtype": "TCP" 00:13:15.031 }, 00:13:15.031 "peer_address": { 00:13:15.031 "adrfam": "IPv4", 00:13:15.031 "traddr": "10.0.0.1", 00:13:15.031 "trsvcid": "59876", 00:13:15.031 "trtype": "TCP" 00:13:15.031 }, 00:13:15.031 "qid": 0, 00:13:15.031 "state": "enabled", 00:13:15.031 "thread": "nvmf_tgt_poll_group_000" 00:13:15.031 } 00:13:15.031 ]' 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.031 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.290 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.290 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.290 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.290 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.290 19:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.549 19:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:13:16.115 19:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.115 19:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:16.115 19:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.115 19:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.373 19:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.373 19:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.373 19:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:16.373 19:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:16.632 19:29:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.632 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.890 00:13:16.890 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.890 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.890 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.457 { 00:13:17.457 "auth": { 00:13:17.457 "dhgroup": "ffdhe6144", 00:13:17.457 "digest": "sha384", 00:13:17.457 "state": "completed" 00:13:17.457 }, 00:13:17.457 "cntlid": 87, 00:13:17.457 "listen_address": { 00:13:17.457 "adrfam": "IPv4", 00:13:17.457 "traddr": "10.0.0.2", 00:13:17.457 "trsvcid": "4420", 00:13:17.457 "trtype": "TCP" 00:13:17.457 }, 00:13:17.457 "peer_address": { 00:13:17.457 "adrfam": "IPv4", 00:13:17.457 "traddr": "10.0.0.1", 00:13:17.457 "trsvcid": "41866", 00:13:17.457 "trtype": "TCP" 00:13:17.457 }, 00:13:17.457 "qid": 0, 00:13:17.457 "state": "enabled", 00:13:17.457 "thread": "nvmf_tgt_poll_group_000" 00:13:17.457 } 00:13:17.457 ]' 00:13:17.457 19:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.457 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.715 19:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.650 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.651 19:29:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.651 19:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.582 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.582 19:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 19:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.839 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.839 { 00:13:19.839 "auth": { 00:13:19.839 "dhgroup": "ffdhe8192", 00:13:19.839 "digest": "sha384", 00:13:19.839 "state": "completed" 00:13:19.839 }, 00:13:19.839 "cntlid": 89, 00:13:19.840 "listen_address": { 00:13:19.840 "adrfam": "IPv4", 00:13:19.840 "traddr": "10.0.0.2", 00:13:19.840 "trsvcid": "4420", 00:13:19.840 "trtype": "TCP" 00:13:19.840 }, 00:13:19.840 "peer_address": { 00:13:19.840 "adrfam": "IPv4", 00:13:19.840 "traddr": "10.0.0.1", 00:13:19.840 "trsvcid": "41886", 00:13:19.840 "trtype": "TCP" 00:13:19.840 }, 00:13:19.840 "qid": 0, 00:13:19.840 "state": "enabled", 00:13:19.840 "thread": "nvmf_tgt_poll_group_000" 00:13:19.840 } 00:13:19.840 ]' 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.840 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.097 19:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret 
DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.030 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.288 19:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.911 00:13:21.911 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.911 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.911 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
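Each attach above is followed by the same verification and teardown leg, again condensed from commands that appear verbatim in this log ($rpc as in the earlier sketch; the DHHC-1 secrets and host UUID are the ones printed in the surrounding lines):

  # the host controller came up under the expected name (auth.sh@44)
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
  # the target reports one enabled qpair whose auth block matches the iteration (auth.sh@45-48)
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'             # e.g. sha384 / ffdhe8192 / completed
  # tear down, then repeat the handshake through the kernel initiator (auth.sh@49-56)
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
      --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."             # secrets as printed above
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055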
00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.169 { 00:13:22.169 "auth": { 00:13:22.169 "dhgroup": "ffdhe8192", 00:13:22.169 "digest": "sha384", 00:13:22.169 "state": "completed" 00:13:22.169 }, 00:13:22.169 "cntlid": 91, 00:13:22.169 "listen_address": { 00:13:22.169 "adrfam": "IPv4", 00:13:22.169 "traddr": "10.0.0.2", 00:13:22.169 "trsvcid": "4420", 00:13:22.169 "trtype": "TCP" 00:13:22.169 }, 00:13:22.169 "peer_address": { 00:13:22.169 "adrfam": "IPv4", 00:13:22.169 "traddr": "10.0.0.1", 00:13:22.169 "trsvcid": "41916", 00:13:22.169 "trtype": "TCP" 00:13:22.169 }, 00:13:22.169 "qid": 0, 00:13:22.169 "state": "enabled", 00:13:22.169 "thread": "nvmf_tgt_poll_group_000" 00:13:22.169 } 00:13:22.169 ]' 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.169 19:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.428 19:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:13:23.362 19:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.619 19:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.551 00:13:24.551 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.551 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.551 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.808 { 00:13:24.808 "auth": { 00:13:24.808 "dhgroup": "ffdhe8192", 00:13:24.808 "digest": "sha384", 00:13:24.808 "state": "completed" 00:13:24.808 }, 00:13:24.808 "cntlid": 93, 00:13:24.808 "listen_address": { 00:13:24.808 "adrfam": "IPv4", 00:13:24.808 "traddr": "10.0.0.2", 00:13:24.808 "trsvcid": "4420", 00:13:24.808 "trtype": "TCP" 00:13:24.808 }, 00:13:24.808 "peer_address": { 00:13:24.808 "adrfam": "IPv4", 00:13:24.808 "traddr": "10.0.0.1", 00:13:24.808 "trsvcid": "41922", 00:13:24.808 
"trtype": "TCP" 00:13:24.808 }, 00:13:24.808 "qid": 0, 00:13:24.808 "state": "enabled", 00:13:24.808 "thread": "nvmf_tgt_poll_group_000" 00:13:24.808 } 00:13:24.808 ]' 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.808 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.066 19:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:25.999 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:13:26.257 19:29:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.257 19:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.845 00:13:26.845 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.845 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.845 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.103 { 00:13:27.103 "auth": { 00:13:27.103 "dhgroup": "ffdhe8192", 00:13:27.103 "digest": "sha384", 00:13:27.103 "state": "completed" 00:13:27.103 }, 00:13:27.103 "cntlid": 95, 00:13:27.103 "listen_address": { 00:13:27.103 "adrfam": "IPv4", 00:13:27.103 "traddr": "10.0.0.2", 00:13:27.103 "trsvcid": "4420", 00:13:27.103 "trtype": "TCP" 00:13:27.103 }, 00:13:27.103 "peer_address": { 00:13:27.103 "adrfam": "IPv4", 00:13:27.103 "traddr": "10.0.0.1", 00:13:27.103 "trsvcid": "41952", 00:13:27.103 "trtype": "TCP" 00:13:27.103 }, 00:13:27.103 "qid": 0, 00:13:27.103 "state": "enabled", 00:13:27.103 "thread": "nvmf_tgt_poll_group_000" 00:13:27.103 } 00:13:27.103 ]' 00:13:27.103 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.362 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.362 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.362 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.362 19:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.362 19:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.362 19:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.362 19:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.620 19:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.557 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.817 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.384 00:13:29.384 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:29.384 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.384 19:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.643 { 00:13:29.643 "auth": { 00:13:29.643 "dhgroup": "null", 00:13:29.643 "digest": "sha512", 00:13:29.643 "state": "completed" 00:13:29.643 }, 00:13:29.643 "cntlid": 97, 00:13:29.643 "listen_address": { 00:13:29.643 "adrfam": "IPv4", 00:13:29.643 "traddr": "10.0.0.2", 00:13:29.643 "trsvcid": "4420", 00:13:29.643 "trtype": "TCP" 00:13:29.643 }, 00:13:29.643 "peer_address": { 00:13:29.643 "adrfam": "IPv4", 00:13:29.643 "traddr": "10.0.0.1", 00:13:29.643 "trsvcid": "34512", 00:13:29.643 "trtype": "TCP" 00:13:29.643 }, 00:13:29.643 "qid": 0, 00:13:29.643 "state": "enabled", 00:13:29.643 "thread": "nvmf_tgt_poll_group_000" 00:13:29.643 } 00:13:29.643 ]' 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.643 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.901 19:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.836 
19:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.836 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.095 19:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.353 00:13:31.353 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.353 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.353 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.630 { 00:13:31.630 "auth": { 00:13:31.630 "dhgroup": "null", 00:13:31.630 "digest": "sha512", 00:13:31.630 "state": "completed" 00:13:31.630 }, 00:13:31.630 "cntlid": 99, 00:13:31.630 "listen_address": { 
00:13:31.630 "adrfam": "IPv4", 00:13:31.630 "traddr": "10.0.0.2", 00:13:31.630 "trsvcid": "4420", 00:13:31.630 "trtype": "TCP" 00:13:31.630 }, 00:13:31.630 "peer_address": { 00:13:31.630 "adrfam": "IPv4", 00:13:31.630 "traddr": "10.0.0.1", 00:13:31.630 "trsvcid": "34532", 00:13:31.630 "trtype": "TCP" 00:13:31.630 }, 00:13:31.630 "qid": 0, 00:13:31.630 "state": "enabled", 00:13:31.630 "thread": "nvmf_tgt_poll_group_000" 00:13:31.630 } 00:13:31.630 ]' 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.630 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.888 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:31.888 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.888 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.888 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.888 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.147 19:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:32.713 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.713 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:32.713 19:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.713 19:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.971 19:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.971 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.971 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:32.971 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.229 19:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.230 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.230 19:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.488 00:13:33.488 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.488 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.488 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.745 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.745 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.745 19:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.745 19:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.745 19:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.746 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.746 { 00:13:33.746 "auth": { 00:13:33.746 "dhgroup": "null", 00:13:33.746 "digest": "sha512", 00:13:33.746 "state": "completed" 00:13:33.746 }, 00:13:33.746 "cntlid": 101, 00:13:33.746 "listen_address": { 00:13:33.746 "adrfam": "IPv4", 00:13:33.746 "traddr": "10.0.0.2", 00:13:33.746 "trsvcid": "4420", 00:13:33.746 "trtype": "TCP" 00:13:33.746 }, 00:13:33.746 "peer_address": { 00:13:33.746 "adrfam": "IPv4", 00:13:33.746 "traddr": "10.0.0.1", 00:13:33.746 "trsvcid": "34554", 00:13:33.746 "trtype": "TCP" 00:13:33.746 }, 00:13:33.746 "qid": 0, 00:13:33.746 "state": "enabled", 00:13:33.746 "thread": "nvmf_tgt_poll_group_000" 00:13:33.746 } 00:13:33.746 ]' 00:13:33.746 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.746 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.746 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.746 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:33.746 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.004 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.004 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:34.004 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.262 19:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.197 19:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.762 00:13:35.762 19:29:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.762 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.762 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.020 { 00:13:36.020 "auth": { 00:13:36.020 "dhgroup": "null", 00:13:36.020 "digest": "sha512", 00:13:36.020 "state": "completed" 00:13:36.020 }, 00:13:36.020 "cntlid": 103, 00:13:36.020 "listen_address": { 00:13:36.020 "adrfam": "IPv4", 00:13:36.020 "traddr": "10.0.0.2", 00:13:36.020 "trsvcid": "4420", 00:13:36.020 "trtype": "TCP" 00:13:36.020 }, 00:13:36.020 "peer_address": { 00:13:36.020 "adrfam": "IPv4", 00:13:36.020 "traddr": "10.0.0.1", 00:13:36.020 "trsvcid": "34586", 00:13:36.020 "trtype": "TCP" 00:13:36.020 }, 00:13:36.020 "qid": 0, 00:13:36.020 "state": "enabled", 00:13:36.020 "thread": "nvmf_tgt_poll_group_000" 00:13:36.020 } 00:13:36.020 ]' 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.020 19:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.278 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:37.213 19:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.474 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.733 00:13:37.733 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.733 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.733 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.299 { 00:13:38.299 "auth": { 00:13:38.299 "dhgroup": "ffdhe2048", 00:13:38.299 "digest": "sha512", 00:13:38.299 "state": 
"completed" 00:13:38.299 }, 00:13:38.299 "cntlid": 105, 00:13:38.299 "listen_address": { 00:13:38.299 "adrfam": "IPv4", 00:13:38.299 "traddr": "10.0.0.2", 00:13:38.299 "trsvcid": "4420", 00:13:38.299 "trtype": "TCP" 00:13:38.299 }, 00:13:38.299 "peer_address": { 00:13:38.299 "adrfam": "IPv4", 00:13:38.299 "traddr": "10.0.0.1", 00:13:38.299 "trsvcid": "44328", 00:13:38.299 "trtype": "TCP" 00:13:38.299 }, 00:13:38.299 "qid": 0, 00:13:38.299 "state": "enabled", 00:13:38.299 "thread": "nvmf_tgt_poll_group_000" 00:13:38.299 } 00:13:38.299 ]' 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.299 19:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.558 19:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.513 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.772 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.031 00:13:40.031 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.031 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.031 19:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.289 { 00:13:40.289 "auth": { 00:13:40.289 "dhgroup": "ffdhe2048", 00:13:40.289 "digest": "sha512", 00:13:40.289 "state": "completed" 00:13:40.289 }, 00:13:40.289 "cntlid": 107, 00:13:40.289 "listen_address": { 00:13:40.289 "adrfam": "IPv4", 00:13:40.289 "traddr": "10.0.0.2", 00:13:40.289 "trsvcid": "4420", 00:13:40.289 "trtype": "TCP" 00:13:40.289 }, 00:13:40.289 "peer_address": { 00:13:40.289 "adrfam": "IPv4", 00:13:40.289 "traddr": "10.0.0.1", 00:13:40.289 "trsvcid": "44360", 00:13:40.289 "trtype": "TCP" 00:13:40.289 }, 00:13:40.289 "qid": 0, 00:13:40.289 "state": "enabled", 00:13:40.289 "thread": "nvmf_tgt_poll_group_000" 00:13:40.289 } 00:13:40.289 ]' 00:13:40.289 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.547 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.547 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.547 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:40.547 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.547 19:29:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.547 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.547 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.805 19:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:41.738 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.996 19:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.254 00:13:42.512 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.512 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.512 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:42.770 { 00:13:42.770 "auth": { 00:13:42.770 "dhgroup": "ffdhe2048", 00:13:42.770 "digest": "sha512", 00:13:42.770 "state": "completed" 00:13:42.770 }, 00:13:42.770 "cntlid": 109, 00:13:42.770 "listen_address": { 00:13:42.770 "adrfam": "IPv4", 00:13:42.770 "traddr": "10.0.0.2", 00:13:42.770 "trsvcid": "4420", 00:13:42.770 "trtype": "TCP" 00:13:42.770 }, 00:13:42.770 "peer_address": { 00:13:42.770 "adrfam": "IPv4", 00:13:42.770 "traddr": "10.0.0.1", 00:13:42.770 "trsvcid": "44382", 00:13:42.770 "trtype": "TCP" 00:13:42.770 }, 00:13:42.770 "qid": 0, 00:13:42.770 "state": "enabled", 00:13:42.770 "thread": "nvmf_tgt_poll_group_000" 00:13:42.770 } 00:13:42.770 ]' 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.770 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.335 19:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:43.913 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:44.170 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:44.170 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.170 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:44.170 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.171 19:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.734 00:13:44.734 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.734 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.734 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:13:44.992 { 00:13:44.992 "auth": { 00:13:44.992 "dhgroup": "ffdhe2048", 00:13:44.992 "digest": "sha512", 00:13:44.992 "state": "completed" 00:13:44.992 }, 00:13:44.992 "cntlid": 111, 00:13:44.992 "listen_address": { 00:13:44.992 "adrfam": "IPv4", 00:13:44.992 "traddr": "10.0.0.2", 00:13:44.992 "trsvcid": "4420", 00:13:44.992 "trtype": "TCP" 00:13:44.992 }, 00:13:44.992 "peer_address": { 00:13:44.992 "adrfam": "IPv4", 00:13:44.992 "traddr": "10.0.0.1", 00:13:44.992 "trsvcid": "44410", 00:13:44.992 "trtype": "TCP" 00:13:44.992 }, 00:13:44.992 "qid": 0, 00:13:44.992 "state": "enabled", 00:13:44.992 "thread": "nvmf_tgt_poll_group_000" 00:13:44.992 } 00:13:44.992 ]' 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.992 19:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.559 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:46.126 19:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.384 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.960 00:13:46.960 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.960 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.960 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.218 { 00:13:47.218 "auth": { 00:13:47.218 "dhgroup": "ffdhe3072", 00:13:47.218 "digest": "sha512", 00:13:47.218 "state": "completed" 00:13:47.218 }, 00:13:47.218 "cntlid": 113, 00:13:47.218 "listen_address": { 00:13:47.218 "adrfam": "IPv4", 00:13:47.218 "traddr": "10.0.0.2", 00:13:47.218 "trsvcid": "4420", 00:13:47.218 "trtype": "TCP" 00:13:47.218 }, 00:13:47.218 "peer_address": { 00:13:47.218 "adrfam": "IPv4", 00:13:47.218 "traddr": "10.0.0.1", 00:13:47.218 "trsvcid": "44762", 00:13:47.218 "trtype": "TCP" 00:13:47.218 }, 00:13:47.218 "qid": 0, 00:13:47.218 "state": "enabled", 00:13:47.218 "thread": "nvmf_tgt_poll_group_000" 00:13:47.218 } 00:13:47.218 ]' 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.218 19:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.477 19:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:48.414 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.673 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.239 00:13:49.239 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.239 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.239 19:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.498 { 00:13:49.498 "auth": { 00:13:49.498 "dhgroup": "ffdhe3072", 00:13:49.498 "digest": "sha512", 00:13:49.498 "state": "completed" 00:13:49.498 }, 00:13:49.498 "cntlid": 115, 00:13:49.498 "listen_address": { 00:13:49.498 "adrfam": "IPv4", 00:13:49.498 "traddr": "10.0.0.2", 00:13:49.498 "trsvcid": "4420", 00:13:49.498 "trtype": "TCP" 00:13:49.498 }, 00:13:49.498 "peer_address": { 00:13:49.498 "adrfam": "IPv4", 00:13:49.498 "traddr": "10.0.0.1", 00:13:49.498 "trsvcid": "44802", 00:13:49.498 "trtype": "TCP" 00:13:49.498 }, 00:13:49.498 "qid": 0, 00:13:49.498 "state": "enabled", 00:13:49.498 "thread": "nvmf_tgt_poll_group_000" 00:13:49.498 } 00:13:49.498 ]' 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.498 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.064 19:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:50.631 19:29:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:50.631 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.890 19:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.457 00:13:51.457 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.457 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.457 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.715 { 00:13:51.715 "auth": { 00:13:51.715 "dhgroup": "ffdhe3072", 00:13:51.715 "digest": "sha512", 00:13:51.715 "state": "completed" 00:13:51.715 }, 00:13:51.715 "cntlid": 117, 00:13:51.715 "listen_address": { 00:13:51.715 "adrfam": "IPv4", 00:13:51.715 "traddr": "10.0.0.2", 00:13:51.715 "trsvcid": "4420", 00:13:51.715 "trtype": "TCP" 00:13:51.715 }, 00:13:51.715 "peer_address": { 00:13:51.715 "adrfam": "IPv4", 00:13:51.715 "traddr": "10.0.0.1", 00:13:51.715 "trsvcid": "44824", 00:13:51.715 "trtype": "TCP" 00:13:51.715 }, 00:13:51.715 "qid": 0, 00:13:51.715 "state": "enabled", 00:13:51.715 "thread": "nvmf_tgt_poll_group_000" 00:13:51.715 } 00:13:51.715 ]' 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.715 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.283 19:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:52.888 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.147 19:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.713 00:13:53.713 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.713 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.713 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.971 { 00:13:53.971 "auth": { 00:13:53.971 "dhgroup": "ffdhe3072", 00:13:53.971 "digest": "sha512", 00:13:53.971 "state": "completed" 00:13:53.971 }, 00:13:53.971 "cntlid": 119, 00:13:53.971 "listen_address": { 00:13:53.971 "adrfam": "IPv4", 00:13:53.971 "traddr": "10.0.0.2", 00:13:53.971 "trsvcid": "4420", 00:13:53.971 "trtype": "TCP" 00:13:53.971 }, 00:13:53.971 "peer_address": { 00:13:53.971 "adrfam": "IPv4", 00:13:53.971 "traddr": "10.0.0.1", 00:13:53.971 "trsvcid": "44850", 00:13:53.971 "trtype": "TCP" 00:13:53.971 }, 00:13:53.971 "qid": 0, 00:13:53.971 "state": "enabled", 00:13:53.971 "thread": "nvmf_tgt_poll_group_000" 00:13:53.971 } 00:13:53.971 ]' 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.971 
19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.971 19:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.534 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:55.100 19:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.665 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.922 00:13:55.922 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.922 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.922 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.180 { 00:13:56.180 "auth": { 00:13:56.180 "dhgroup": "ffdhe4096", 00:13:56.180 "digest": "sha512", 00:13:56.180 "state": "completed" 00:13:56.180 }, 00:13:56.180 "cntlid": 121, 00:13:56.180 "listen_address": { 00:13:56.180 "adrfam": "IPv4", 00:13:56.180 "traddr": "10.0.0.2", 00:13:56.180 "trsvcid": "4420", 00:13:56.180 "trtype": "TCP" 00:13:56.180 }, 00:13:56.180 "peer_address": { 00:13:56.180 "adrfam": "IPv4", 00:13:56.180 "traddr": "10.0.0.1", 00:13:56.180 "trsvcid": "44866", 00:13:56.180 "trtype": "TCP" 00:13:56.180 }, 00:13:56.180 "qid": 0, 00:13:56.180 "state": "enabled", 00:13:56.180 "thread": "nvmf_tgt_poll_group_000" 00:13:56.180 } 00:13:56.180 ]' 00:13:56.180 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.437 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.437 19:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.438 19:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:56.438 19:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.438 19:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.438 19:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.438 19:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.695 19:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret 
DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:57.629 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.888 19:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.454 00:13:58.454 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.454 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.454 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:13:58.712 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.712 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.712 19:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.712 19:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.712 19:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.712 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.712 { 00:13:58.713 "auth": { 00:13:58.713 "dhgroup": "ffdhe4096", 00:13:58.713 "digest": "sha512", 00:13:58.713 "state": "completed" 00:13:58.713 }, 00:13:58.713 "cntlid": 123, 00:13:58.713 "listen_address": { 00:13:58.713 "adrfam": "IPv4", 00:13:58.713 "traddr": "10.0.0.2", 00:13:58.713 "trsvcid": "4420", 00:13:58.713 "trtype": "TCP" 00:13:58.713 }, 00:13:58.713 "peer_address": { 00:13:58.713 "adrfam": "IPv4", 00:13:58.713 "traddr": "10.0.0.1", 00:13:58.713 "trsvcid": "60892", 00:13:58.713 "trtype": "TCP" 00:13:58.713 }, 00:13:58.713 "qid": 0, 00:13:58.713 "state": "enabled", 00:13:58.713 "thread": "nvmf_tgt_poll_group_000" 00:13:58.713 } 00:13:58.713 ]' 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.713 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.970 19:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:59.904 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.162 19:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.420 00:14:00.420 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.420 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.420 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.677 { 00:14:00.677 "auth": { 00:14:00.677 "dhgroup": "ffdhe4096", 00:14:00.677 "digest": "sha512", 00:14:00.677 "state": "completed" 00:14:00.677 }, 00:14:00.677 "cntlid": 125, 00:14:00.677 "listen_address": { 00:14:00.677 "adrfam": "IPv4", 00:14:00.677 "traddr": "10.0.0.2", 00:14:00.677 "trsvcid": "4420", 00:14:00.677 "trtype": "TCP" 00:14:00.677 }, 00:14:00.677 "peer_address": { 00:14:00.677 "adrfam": "IPv4", 00:14:00.677 "traddr": "10.0.0.1", 00:14:00.677 "trsvcid": "60926", 00:14:00.677 
"trtype": "TCP" 00:14:00.677 }, 00:14:00.677 "qid": 0, 00:14:00.677 "state": "enabled", 00:14:00.677 "thread": "nvmf_tgt_poll_group_000" 00:14:00.677 } 00:14:00.677 ]' 00:14:00.677 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.934 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.191 19:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:02.123 19:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:14:02.689 19:29:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.689 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.946 00:14:02.946 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.946 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.946 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.204 { 00:14:03.204 "auth": { 00:14:03.204 "dhgroup": "ffdhe4096", 00:14:03.204 "digest": "sha512", 00:14:03.204 "state": "completed" 00:14:03.204 }, 00:14:03.204 "cntlid": 127, 00:14:03.204 "listen_address": { 00:14:03.204 "adrfam": "IPv4", 00:14:03.204 "traddr": "10.0.0.2", 00:14:03.204 "trsvcid": "4420", 00:14:03.204 "trtype": "TCP" 00:14:03.204 }, 00:14:03.204 "peer_address": { 00:14:03.204 "adrfam": "IPv4", 00:14:03.204 "traddr": "10.0.0.1", 00:14:03.204 "trsvcid": "60948", 00:14:03.204 "trtype": "TCP" 00:14:03.204 }, 00:14:03.204 "qid": 0, 00:14:03.204 "state": "enabled", 00:14:03.204 "thread": "nvmf_tgt_poll_group_000" 00:14:03.204 } 00:14:03.204 ]' 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.204 19:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.204 19:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:03.204 19:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.462 19:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.462 19:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.462 19:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.718 19:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.652 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.219 00:14:05.219 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.219 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.219 19:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.477 { 00:14:05.477 "auth": { 00:14:05.477 "dhgroup": "ffdhe6144", 00:14:05.477 "digest": "sha512", 00:14:05.477 "state": "completed" 00:14:05.477 }, 00:14:05.477 "cntlid": 129, 00:14:05.477 "listen_address": { 00:14:05.477 "adrfam": "IPv4", 00:14:05.477 "traddr": "10.0.0.2", 00:14:05.477 "trsvcid": "4420", 00:14:05.477 "trtype": "TCP" 00:14:05.477 }, 00:14:05.477 "peer_address": { 00:14:05.477 "adrfam": "IPv4", 00:14:05.477 "traddr": "10.0.0.1", 00:14:05.477 "trsvcid": "60974", 00:14:05.477 "trtype": "TCP" 00:14:05.477 }, 00:14:05.477 "qid": 0, 00:14:05.477 "state": "enabled", 00:14:05.477 "thread": "nvmf_tgt_poll_group_000" 00:14:05.477 } 00:14:05.477 ]' 00:14:05.477 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.735 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.993 19:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:14:06.928 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.928 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:06.928 19:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.928 19:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 19:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:06.929 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.929 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:06.929 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.187 19:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.752 00:14:07.753 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.753 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.753 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.011 { 00:14:08.011 "auth": { 00:14:08.011 "dhgroup": "ffdhe6144", 00:14:08.011 "digest": "sha512", 00:14:08.011 "state": "completed" 00:14:08.011 }, 00:14:08.011 "cntlid": 131, 00:14:08.011 "listen_address": { 00:14:08.011 "adrfam": "IPv4", 00:14:08.011 "traddr": "10.0.0.2", 
00:14:08.011 "trsvcid": "4420", 00:14:08.011 "trtype": "TCP" 00:14:08.011 }, 00:14:08.011 "peer_address": { 00:14:08.011 "adrfam": "IPv4", 00:14:08.011 "traddr": "10.0.0.1", 00:14:08.011 "trsvcid": "38426", 00:14:08.011 "trtype": "TCP" 00:14:08.011 }, 00:14:08.011 "qid": 0, 00:14:08.011 "state": "enabled", 00:14:08.011 "thread": "nvmf_tgt_poll_group_000" 00:14:08.011 } 00:14:08.011 ]' 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.011 19:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.618 19:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:09.185 19:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.444 19:29:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.444 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.012 00:14:10.012 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.012 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.012 19:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.580 { 00:14:10.580 "auth": { 00:14:10.580 "dhgroup": "ffdhe6144", 00:14:10.580 "digest": "sha512", 00:14:10.580 "state": "completed" 00:14:10.580 }, 00:14:10.580 "cntlid": 133, 00:14:10.580 "listen_address": { 00:14:10.580 "adrfam": "IPv4", 00:14:10.580 "traddr": "10.0.0.2", 00:14:10.580 "trsvcid": "4420", 00:14:10.580 "trtype": "TCP" 00:14:10.580 }, 00:14:10.580 "peer_address": { 00:14:10.580 "adrfam": "IPv4", 00:14:10.580 "traddr": "10.0.0.1", 00:14:10.580 "trsvcid": "38458", 00:14:10.580 "trtype": "TCP" 00:14:10.580 }, 00:14:10.580 "qid": 0, 00:14:10.580 "state": "enabled", 00:14:10.580 "thread": "nvmf_tgt_poll_group_000" 00:14:10.580 } 00:14:10.580 ]' 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.580 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.581 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.581 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:10.581 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.581 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.581 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:10.581 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.838 19:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:11.793 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.050 19:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.334 
00:14:12.608 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.608 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.608 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.867 { 00:14:12.867 "auth": { 00:14:12.867 "dhgroup": "ffdhe6144", 00:14:12.867 "digest": "sha512", 00:14:12.867 "state": "completed" 00:14:12.867 }, 00:14:12.867 "cntlid": 135, 00:14:12.867 "listen_address": { 00:14:12.867 "adrfam": "IPv4", 00:14:12.867 "traddr": "10.0.0.2", 00:14:12.867 "trsvcid": "4420", 00:14:12.867 "trtype": "TCP" 00:14:12.867 }, 00:14:12.867 "peer_address": { 00:14:12.867 "adrfam": "IPv4", 00:14:12.867 "traddr": "10.0.0.1", 00:14:12.867 "trsvcid": "38480", 00:14:12.867 "trtype": "TCP" 00:14:12.867 }, 00:14:12.867 "qid": 0, 00:14:12.867 "state": "enabled", 00:14:12.867 "thread": "nvmf_tgt_poll_group_000" 00:14:12.867 } 00:14:12.867 ]' 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.867 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.433 19:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.997 19:30:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.997 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.255 19:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.255 19:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.255 19:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.255 19:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.189 00:14:15.189 19:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.189 19:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.189 19:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.447 { 00:14:15.447 "auth": { 00:14:15.447 "dhgroup": "ffdhe8192", 00:14:15.447 "digest": "sha512", 
00:14:15.447 "state": "completed" 00:14:15.447 }, 00:14:15.447 "cntlid": 137, 00:14:15.447 "listen_address": { 00:14:15.447 "adrfam": "IPv4", 00:14:15.447 "traddr": "10.0.0.2", 00:14:15.447 "trsvcid": "4420", 00:14:15.447 "trtype": "TCP" 00:14:15.447 }, 00:14:15.447 "peer_address": { 00:14:15.447 "adrfam": "IPv4", 00:14:15.447 "traddr": "10.0.0.1", 00:14:15.447 "trsvcid": "38520", 00:14:15.447 "trtype": "TCP" 00:14:15.447 }, 00:14:15.447 "qid": 0, 00:14:15.447 "state": "enabled", 00:14:15.447 "thread": "nvmf_tgt_poll_group_000" 00:14:15.447 } 00:14:15.447 ]' 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.447 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.448 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.448 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:15.448 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.448 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.448 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.448 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.705 19:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:16.640 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:16.898 19:30:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.898 19:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.465 00:14:17.465 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.465 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.465 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.032 { 00:14:18.032 "auth": { 00:14:18.032 "dhgroup": "ffdhe8192", 00:14:18.032 "digest": "sha512", 00:14:18.032 "state": "completed" 00:14:18.032 }, 00:14:18.032 "cntlid": 139, 00:14:18.032 "listen_address": { 00:14:18.032 "adrfam": "IPv4", 00:14:18.032 "traddr": "10.0.0.2", 00:14:18.032 "trsvcid": "4420", 00:14:18.032 "trtype": "TCP" 00:14:18.032 }, 00:14:18.032 "peer_address": { 00:14:18.032 "adrfam": "IPv4", 00:14:18.032 "traddr": "10.0.0.1", 00:14:18.032 "trsvcid": "37558", 00:14:18.032 "trtype": "TCP" 00:14:18.032 }, 00:14:18.032 "qid": 0, 00:14:18.032 "state": "enabled", 00:14:18.032 "thread": "nvmf_tgt_poll_group_000" 00:14:18.032 } 00:14:18.032 ]' 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
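After each successful attach, the test does not trust the return code alone: it asks the target for the subsystem's queue pairs and asserts the negotiated digest, DH group, and authentication state, which is what the jq checks above and below are doing. A condensed sketch of that verification for the sha512/ffdhe8192 case, assuming the target's default RPC socket.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The attach only counts if the qpair actually negotiated the expected parameters.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]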
00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.032 19:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.291 19:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:01:NDhiZDBhM2FmYzE2YmYzMTE3YTk0ODRlNDQ2NDg0NDa9Ifhg: --dhchap-ctrl-secret DHHC-1:02:MTg1MGU1MGNiZTc1YjYyZDA5ODE4ZjU3OGUyYWNjNGQyYWM5YzVhMmUyYjVjYTkzUFeZWg==: 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:19.227 19:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.485 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.051 00:14:20.051 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.051 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.051 19:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.310 { 00:14:20.310 "auth": { 00:14:20.310 "dhgroup": "ffdhe8192", 00:14:20.310 "digest": "sha512", 00:14:20.310 "state": "completed" 00:14:20.310 }, 00:14:20.310 "cntlid": 141, 00:14:20.310 "listen_address": { 00:14:20.310 "adrfam": "IPv4", 00:14:20.310 "traddr": "10.0.0.2", 00:14:20.310 "trsvcid": "4420", 00:14:20.310 "trtype": "TCP" 00:14:20.310 }, 00:14:20.310 "peer_address": { 00:14:20.310 "adrfam": "IPv4", 00:14:20.310 "traddr": "10.0.0.1", 00:14:20.310 "trsvcid": "37602", 00:14:20.310 "trtype": "TCP" 00:14:20.310 }, 00:14:20.310 "qid": 0, 00:14:20.310 "state": "enabled", 00:14:20.310 "thread": "nvmf_tgt_poll_group_000" 00:14:20.310 } 00:14:20.310 ]' 00:14:20.310 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.569 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.829 19:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:02:NzQzNmExZGM4NTFlOTFlMmJkZmI4MjU5NDJkMzZjMTE2MDUyNmMwNmY2NjRkNjlkmucOkA==: --dhchap-ctrl-secret DHHC-1:01:ODdjZTM4ZmE2MzlhMDliNzIwYjNlY2RhNGY2MWMwMDH5v6We: 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.812 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:21.813 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.071 19:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.636 00:14:22.636 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.636 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.636 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
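Each key is also exercised through the kernel initiator: nvme connect is handed the raw DHHC-1 secrets directly, the controller is torn down with nvme disconnect, and the host entry is then removed on the target, as in the records above. A sketch of that path; the two secret variables are placeholders for the DHHC-1 strings that correspond to the key registered on the target (the real values appear verbatim earlier in this log).

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055
HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055
DHCHAP_SECRET='DHHC-1:02:...'       # placeholder: host secret matching --dhchap-key
DHCHAP_CTRL_SECRET='DHHC-1:01:...'  # placeholder: controller secret matching --dhchap-ctrlr-key

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n "$SUBNQN"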
00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.894 { 00:14:22.894 "auth": { 00:14:22.894 "dhgroup": "ffdhe8192", 00:14:22.894 "digest": "sha512", 00:14:22.894 "state": "completed" 00:14:22.894 }, 00:14:22.894 "cntlid": 143, 00:14:22.894 "listen_address": { 00:14:22.894 "adrfam": "IPv4", 00:14:22.894 "traddr": "10.0.0.2", 00:14:22.894 "trsvcid": "4420", 00:14:22.894 "trtype": "TCP" 00:14:22.894 }, 00:14:22.894 "peer_address": { 00:14:22.894 "adrfam": "IPv4", 00:14:22.894 "traddr": "10.0.0.1", 00:14:22.894 "trsvcid": "37634", 00:14:22.894 "trtype": "TCP" 00:14:22.894 }, 00:14:22.894 "qid": 0, 00:14:22.894 "state": "enabled", 00:14:22.894 "thread": "nvmf_tgt_poll_group_000" 00:14:22.894 } 00:14:22.894 ]' 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.894 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.152 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:23.152 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.152 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.152 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.152 19:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.410 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:14:24.340 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:24.341 19:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.341 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.273 00:14:25.273 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.273 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.273 19:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.530 { 00:14:25.530 "auth": { 00:14:25.530 "dhgroup": "ffdhe8192", 00:14:25.530 "digest": "sha512", 00:14:25.530 "state": "completed" 00:14:25.530 }, 00:14:25.530 "cntlid": 145, 00:14:25.530 "listen_address": { 00:14:25.530 "adrfam": "IPv4", 00:14:25.530 "traddr": "10.0.0.2", 00:14:25.530 "trsvcid": "4420", 00:14:25.530 "trtype": "TCP" 00:14:25.530 }, 00:14:25.530 "peer_address": { 00:14:25.530 "adrfam": "IPv4", 00:14:25.530 "traddr": "10.0.0.1", 00:14:25.530 "trsvcid": "37658", 00:14:25.530 "trtype": "TCP" 00:14:25.530 }, 00:14:25.530 "qid": 0, 00:14:25.530 "state": "enabled", 00:14:25.530 "thread": "nvmf_tgt_poll_group_000" 00:14:25.530 } 
00:14:25.530 ]' 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.530 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.787 19:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:00:MzUyMzVhYjY5NmJkODhiMTlkZTZiYzY5YmM5MTJlNTNmOTYyYzVlMTM4ODgzMDBl0qszYA==: --dhchap-ctrl-secret DHHC-1:03:NjcwMmM0ODAxNTNkYWRhNzRmMGU4MWVmN2E4MmMxY2MwOWJhMjY3ODY0ZGVmNTA2NzEzNDMwMTk5ZWYwODZmMxfCLFY=: 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.722 19:30:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:26.722 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:27.288 2024/07/15 19:30:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:27.288 request: 00:14:27.288 { 00:14:27.288 "method": "bdev_nvme_attach_controller", 00:14:27.288 "params": { 00:14:27.288 "name": "nvme0", 00:14:27.288 "trtype": "tcp", 00:14:27.288 "traddr": "10.0.0.2", 00:14:27.288 "adrfam": "ipv4", 00:14:27.288 "trsvcid": "4420", 00:14:27.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055", 00:14:27.288 "prchk_reftag": false, 00:14:27.288 "prchk_guard": false, 00:14:27.288 "hdgst": false, 00:14:27.288 "ddgst": false, 00:14:27.288 "dhchap_key": "key2" 00:14:27.288 } 00:14:27.288 } 00:14:27.288 Got JSON-RPC error response 00:14:27.288 GoRPCClient: error on JSON-RPC call 00:14:27.288 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.289 19:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.864 2024/07/15 19:30:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:27.864 request: 00:14:27.864 { 00:14:27.864 "method": "bdev_nvme_attach_controller", 00:14:27.864 "params": { 00:14:27.864 "name": "nvme0", 00:14:27.864 "trtype": "tcp", 00:14:27.864 "traddr": "10.0.0.2", 00:14:27.864 "adrfam": "ipv4", 00:14:27.864 "trsvcid": "4420", 00:14:27.864 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055", 00:14:27.864 "prchk_reftag": false, 00:14:27.864 "prchk_guard": false, 00:14:27.864 "hdgst": false, 00:14:27.864 "ddgst": false, 00:14:27.864 "dhchap_key": "key1", 00:14:27.864 "dhchap_ctrlr_key": "ckey2" 00:14:27.864 } 00:14:27.864 } 00:14:27.864 Got JSON-RPC error response 00:14:27.864 GoRPCClient: error on JSON-RPC call 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key1 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.864 19:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.822 2024/07/15 19:30:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:28.822 request: 00:14:28.822 { 00:14:28.822 "method": "bdev_nvme_attach_controller", 00:14:28.822 "params": { 00:14:28.822 "name": "nvme0", 00:14:28.822 "trtype": "tcp", 00:14:28.822 "traddr": "10.0.0.2", 00:14:28.822 "adrfam": "ipv4", 00:14:28.822 "trsvcid": "4420", 00:14:28.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:14:28.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055", 00:14:28.822 "prchk_reftag": false, 00:14:28.822 "prchk_guard": false, 00:14:28.822 "hdgst": false, 00:14:28.822 "ddgst": false, 00:14:28.822 "dhchap_key": "key1", 00:14:28.822 "dhchap_ctrlr_key": "ckey1" 00:14:28.822 } 00:14:28.822 } 00:14:28.822 Got JSON-RPC error response 00:14:28.822 GoRPCClient: error on JSON-RPC call 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77900 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77900 ']' 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77900 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77900 00:14:28.822 killing process with pid 77900 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77900' 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77900 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77900 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82877 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82877 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82877 ']' 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.822 19:30:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.822 19:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82877 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82877 ']' 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
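At this point the original target (pid 77900) has been killed and a new nvmf_tgt is brought up for the error-path tests, this time with the nvmf_auth log flag and --wait-for-rpc so it can be configured before serving I/O. A sketch of that restart as recorded above; waitforlisten is the test framework's helper that blocks until the RPC socket is listening.

# Launch a fresh target inside the test's network namespace with auth-layer logging.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Block until /var/tmp/spdk.sock answers (framework helper, invoked with the pid as above).
waitforlisten "$nvmfpid"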
00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.193 19:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.451 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.016 00:14:31.016 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.016 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.016 19:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.273 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.274 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.274 19:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.274 19:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.274 19:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.274 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.274 { 00:14:31.274 "auth": { 00:14:31.274 "dhgroup": 
"ffdhe8192", 00:14:31.274 "digest": "sha512", 00:14:31.274 "state": "completed" 00:14:31.274 }, 00:14:31.274 "cntlid": 1, 00:14:31.274 "listen_address": { 00:14:31.274 "adrfam": "IPv4", 00:14:31.274 "traddr": "10.0.0.2", 00:14:31.274 "trsvcid": "4420", 00:14:31.274 "trtype": "TCP" 00:14:31.274 }, 00:14:31.274 "peer_address": { 00:14:31.274 "adrfam": "IPv4", 00:14:31.274 "traddr": "10.0.0.1", 00:14:31.274 "trsvcid": "38528", 00:14:31.274 "trtype": "TCP" 00:14:31.274 }, 00:14:31.274 "qid": 0, 00:14:31.274 "state": "enabled", 00:14:31.274 "thread": "nvmf_tgt_poll_group_000" 00:14:31.274 } 00:14:31.274 ]' 00:14:31.274 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.532 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.790 19:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid 679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-secret DHHC-1:03:Mjk1YmIzOTA5YzU1MGMyMTE3M2RkYjg2MzJmOGZmM2IwZDNlMWY0NjcyNjYyYTg5ZjVjZThkYWRkNDdiYjQzNHvEqS8=: 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --dhchap-key key3 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.725 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.982 2024/07/15 19:30:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:32.982 request: 00:14:32.982 { 00:14:32.982 "method": "bdev_nvme_attach_controller", 00:14:32.982 "params": { 00:14:32.982 "name": "nvme0", 00:14:32.982 "trtype": "tcp", 00:14:32.982 "traddr": "10.0.0.2", 00:14:32.982 "adrfam": "ipv4", 00:14:32.982 "trsvcid": "4420", 00:14:32.982 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:32.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055", 00:14:32.982 "prchk_reftag": false, 00:14:32.982 "prchk_guard": false, 00:14:32.982 "hdgst": false, 00:14:32.982 "ddgst": false, 00:14:32.982 "dhchap_key": "key3" 00:14:32.982 } 00:14:32.982 } 00:14:32.982 Got JSON-RPC error response 00:14:32.982 GoRPCClient: error on JSON-RPC call 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
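The entries above walk through a DH-CHAP negative test: the target now accepts only key3 for this host, while the host-side bdev layer has been restricted to the sha256 digest, so the controller attach that follows is expected to fail with an I/O error. A condensed sketch of that host-side sequence, using the same rpc.py flags, socket path, and NQNs that appear in the trace (this is not the verbatim test script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock

    # Restrict the host initiator to a single DH-CHAP digest; the earlier
    # successful session above negotiated sha512/ffdhe8192.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256

    # The attach is expected to error out (Code=-5 in the trace); the test
    # treats a non-zero exit here as a pass.
    if "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 \
          -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 \
          -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "unexpected: attach succeeded with mismatched DH-CHAP options" >&2
        exit 1
    fi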
00:14:32.982 19:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.550 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.812 2024/07/15 19:30:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:33.812 request: 00:14:33.812 { 00:14:33.812 "method": "bdev_nvme_attach_controller", 00:14:33.812 "params": { 00:14:33.812 "name": "nvme0", 00:14:33.812 "trtype": "tcp", 00:14:33.812 "traddr": "10.0.0.2", 00:14:33.812 "adrfam": "ipv4", 00:14:33.812 "trsvcid": "4420", 00:14:33.812 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:33.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055", 00:14:33.812 "prchk_reftag": false, 00:14:33.812 "prchk_guard": false, 00:14:33.812 "hdgst": false, 00:14:33.812 "ddgst": false, 00:14:33.812 "dhchap_key": "key3" 00:14:33.812 } 00:14:33.812 } 00:14:33.812 Got JSON-RPC error response 00:14:33.812 GoRPCClient: error on JSON-RPC call 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
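The NOT / valid_exec_arg helpers from autotest_common.sh drive these expected-failure checks: run the wrapped command, capture its exit status, and pass only when the status is non-zero without looking like a signal death. A rough paraphrase of the pattern the trace shows (the real helper in autotest_common.sh handles more cases; this is only an illustration):

    # Hedged paraphrase of the expected-failure wrapper seen in the trace.
    expect_failure() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # command died from a signal: real error
        (( es != 0 ))                # pass only if it failed cleanly
    }

    # Usage, mirroring the log:
    #   expect_failure hostrpc bdev_nvme_attach_controller ... --dhchap-key key3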
00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:33.812 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:34.071 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:34.328 2024/07/15 19:30:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:34.328 request: 00:14:34.328 { 00:14:34.328 "method": "bdev_nvme_attach_controller", 00:14:34.328 "params": { 00:14:34.328 "name": "nvme0", 00:14:34.328 "trtype": "tcp", 00:14:34.328 "traddr": "10.0.0.2", 00:14:34.328 "adrfam": "ipv4", 00:14:34.328 "trsvcid": "4420", 00:14:34.328 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:34.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055", 00:14:34.328 "prchk_reftag": false, 00:14:34.328 "prchk_guard": false, 00:14:34.328 "hdgst": false, 00:14:34.328 "ddgst": false, 00:14:34.328 "dhchap_key": "key0", 00:14:34.328 "dhchap_ctrlr_key": "key1" 00:14:34.328 } 00:14:34.328 } 00:14:34.328 Got JSON-RPC error response 00:14:34.328 GoRPCClient: error on JSON-RPC call 00:14:34.328 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:34.328 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.328 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.328 19:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.328 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:34.328 19:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:34.631 00:14:34.631 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:34.631 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:34.631 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.896 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.896 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.896 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77931 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 77931 ']' 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77931 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77931 00:14:35.154 killing process with pid 77931 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77931' 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77931 00:14:35.154 19:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77931 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.412 rmmod nvme_tcp 00:14:35.412 rmmod nvme_fabrics 00:14:35.412 rmmod nvme_keyring 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82877 ']' 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82877 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82877 ']' 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82877 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.412 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82877 00:14:35.671 killing process with pid 82877 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82877' 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82877 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82877 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.g6k /tmp/spdk.key-sha256.fZg /tmp/spdk.key-sha384.neV /tmp/spdk.key-sha512.Xni /tmp/spdk.key-sha512.9qB /tmp/spdk.key-sha384.uBv /tmp/spdk.key-sha256.hxB '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:35.671 ************************************ 00:14:35.671 END TEST nvmf_auth_target 00:14:35.671 00:14:35.671 real 3m1.252s 00:14:35.671 user 7m20.115s 00:14:35.671 sys 0m22.193s 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.671 19:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.671 ************************************ 00:14:35.929 19:30:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.929 19:30:25 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:35.929 19:30:25 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:35.929 19:30:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:35.929 19:30:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.929 19:30:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.929 ************************************ 00:14:35.929 START TEST nvmf_bdevio_no_huge 00:14:35.929 ************************************ 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:35.929 * Looking for test storage... 
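The nvmf_auth_target run above ends by detaching the controller, stopping the target and host apps, and deleting the DH-CHAP key files it generated under /tmp. A condensed sketch of that cleanup (key-file suffixes such as .g6k or .Xni are random per run, so a glob stands in for them here):

    # Sketch of the auth test's cleanup; the rm -f in the trace also removes
    # the nvme-auth.log / nvmf-auth.log output files.
    rm -f /tmp/spdk.key-null.*  /tmp/spdk.key-sha256.* \
          /tmp/spdk.key-sha384.* /tmp/spdk.key-sha512.*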
00:14:35.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:35.929 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.930 19:30:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:35.930 Cannot find device "nvmf_tgt_br" 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.930 Cannot find device "nvmf_tgt_br2" 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:35.930 Cannot find device "nvmf_tgt_br" 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:35.930 Cannot find device "nvmf_tgt_br2" 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
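nvmf_veth_init first probes for leftovers from a previous run (the "Cannot find device" messages above are the expected result on a clean host) and then rebuilds a small veth topology: one pair for the initiator side, one pair whose far end is moved into the nvmf_tgt_ns_spdk namespace for the target, and a bridge joining the host-visible ends. A condensed sketch using the same names as the commands that follow in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth ends so 10.0.0.1 can reach 10.0.0.2.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT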
00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.930 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:36.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:14:36.189 00:14:36.189 --- 10.0.0.2 ping statistics --- 00:14:36.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.189 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:36.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:36.189 00:14:36.189 --- 10.0.0.3 ping statistics --- 00:14:36.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.189 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:36.189 00:14:36.189 --- 10.0.0.1 ping statistics --- 00:14:36.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.189 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83282 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83282 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83282 ']' 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:36.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.189 19:30:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:36.189 [2024-07-15 19:30:25.986527] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:36.189 [2024-07-15 19:30:25.986628] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:36.447 [2024-07-15 19:30:26.127157] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.705 [2024-07-15 19:30:26.270486] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.705 [2024-07-15 19:30:26.270552] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.705 [2024-07-15 19:30:26.270564] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.705 [2024-07-15 19:30:26.270572] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.705 [2024-07-15 19:30:26.270580] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.705 [2024-07-15 19:30:26.270746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:36.705 [2024-07-15 19:30:26.270887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:36.705 [2024-07-15 19:30:26.271113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:36.705 [2024-07-15 19:30:26.271117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.272 19:30:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.272 19:30:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:37.272 19:30:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.272 19:30:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.272 19:30:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.272 [2024-07-15 19:30:27.025538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.272 Malloc0 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
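With the namespace network reachable (the three pings above), the bdevio test starts nvmf_tgt inside nvmf_tgt_ns_spdk with hugepages disabled (--no-huge -s 1024) and then provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and a subsystem exposing that bdev on 10.0.0.2:4420. A condensed sketch of those steps as they appear in the trace, not the verbatim script (the namespace and listener RPCs show up just below in the log, and the real run waits for the RPC socket before issuing calls):

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # (waitforlisten polls /var/tmp/spdk.sock here before continuing)

    RPC="$SPDK/scripts/rpc.py"
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420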
00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.272 [2024-07-15 19:30:27.063590] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:37.272 { 00:14:37.272 "params": { 00:14:37.272 "name": "Nvme$subsystem", 00:14:37.272 "trtype": "$TEST_TRANSPORT", 00:14:37.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.272 "adrfam": "ipv4", 00:14:37.272 "trsvcid": "$NVMF_PORT", 00:14:37.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.272 "hdgst": ${hdgst:-false}, 00:14:37.272 "ddgst": ${ddgst:-false} 00:14:37.272 }, 00:14:37.272 "method": "bdev_nvme_attach_controller" 00:14:37.272 } 00:14:37.272 EOF 00:14:37.272 )") 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:37.272 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
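The heredoc above is how gen_nvmf_target_json builds the initiator-side configuration: one bdev_nvme_attach_controller entry per subsystem, which the helper wraps into a full SPDK JSON config (that wrapping is an assumption based on the printf output shown next in the trace). The /dev/fd/62 argument on the bdevio command line is bash process substitution feeding that JSON straight in; a hedged sketch of the invocation:

    SPDK=/home/vagrant/spdk_repo/spdk
    # bdevio also runs with hugepages disabled, reading its bdev config from
    # the JSON produced by gen_nvmf_target_json (printed just below).
    "$SPDK/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json) --no-huge -s 1024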
00:14:37.531 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:37.531 19:30:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.531 "params": { 00:14:37.531 "name": "Nvme1", 00:14:37.531 "trtype": "tcp", 00:14:37.531 "traddr": "10.0.0.2", 00:14:37.531 "adrfam": "ipv4", 00:14:37.531 "trsvcid": "4420", 00:14:37.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.531 "hdgst": false, 00:14:37.531 "ddgst": false 00:14:37.531 }, 00:14:37.531 "method": "bdev_nvme_attach_controller" 00:14:37.531 }' 00:14:37.531 [2024-07-15 19:30:27.129957] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:37.531 [2024-07-15 19:30:27.130100] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83336 ] 00:14:37.531 [2024-07-15 19:30:27.282652] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.789 [2024-07-15 19:30:27.411160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.789 [2024-07-15 19:30:27.411238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.789 [2024-07-15 19:30:27.411245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.789 I/O targets: 00:14:37.789 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:37.789 00:14:37.789 00:14:37.789 CUnit - A unit testing framework for C - Version 2.1-3 00:14:37.789 http://cunit.sourceforge.net/ 00:14:37.789 00:14:37.789 00:14:37.789 Suite: bdevio tests on: Nvme1n1 00:14:38.046 Test: blockdev write read block ...passed 00:14:38.046 Test: blockdev write zeroes read block ...passed 00:14:38.046 Test: blockdev write zeroes read no split ...passed 00:14:38.046 Test: blockdev write zeroes read split ...passed 00:14:38.046 Test: blockdev write zeroes read split partial ...passed 00:14:38.046 Test: blockdev reset ...[2024-07-15 19:30:27.721483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:38.046 [2024-07-15 19:30:27.721611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb2600 (9): Bad file descriptor 00:14:38.046 [2024-07-15 19:30:27.736618] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:38.046 passed 00:14:38.046 Test: blockdev write read 8 blocks ...passed 00:14:38.046 Test: blockdev write read size > 128k ...passed 00:14:38.046 Test: blockdev write read invalid size ...passed 00:14:38.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:38.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:38.046 Test: blockdev write read max offset ...passed 00:14:38.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:38.304 Test: blockdev writev readv 8 blocks ...passed 00:14:38.304 Test: blockdev writev readv 30 x 1block ...passed 00:14:38.304 Test: blockdev writev readv block ...passed 00:14:38.304 Test: blockdev writev readv size > 128k ...passed 00:14:38.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:38.304 Test: blockdev comparev and writev ...[2024-07-15 19:30:27.910066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.910149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.910180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.910198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.910680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.910726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.910756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.910775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.911293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.911340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.911386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.911406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.911875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.911921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.911948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:38.304 [2024-07-15 19:30:27.911966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:38.304 passed 00:14:38.304 Test: blockdev nvme passthru rw ...passed 00:14:38.304 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:30:27.994900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:38.304 [2024-07-15 19:30:27.994975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:38.304 [2024-07-15 19:30:27.995200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:38.305 [2024-07-15 19:30:27.995235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:38.305 [2024-07-15 19:30:27.995445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:38.305 [2024-07-15 19:30:27.995478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:38.305 [2024-07-15 19:30:27.995668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:38.305 [2024-07-15 19:30:27.995711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:38.305 passed 00:14:38.305 Test: blockdev nvme admin passthru ...passed 00:14:38.305 Test: blockdev copy ...passed 00:14:38.305 00:14:38.305 Run Summary: Type Total Ran Passed Failed Inactive 00:14:38.305 suites 1 1 n/a 0 0 00:14:38.305 tests 23 23 23 0 0 00:14:38.305 asserts 152 152 152 0 n/a 00:14:38.305 00:14:38.305 Elapsed time = 0.935 seconds 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.870 rmmod nvme_tcp 00:14:38.870 rmmod nvme_fabrics 00:14:38.870 rmmod nvme_keyring 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83282 ']' 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83282 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83282 ']' 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83282 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83282 00:14:38.870 killing process with pid 83282 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83282' 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83282 00:14:38.870 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83282 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:39.437 ************************************ 00:14:39.437 END TEST nvmf_bdevio_no_huge 00:14:39.437 ************************************ 00:14:39.437 00:14:39.437 real 0m3.485s 00:14:39.437 user 0m12.623s 00:14:39.437 sys 0m1.218s 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.437 19:30:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.437 19:30:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:39.437 19:30:29 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:39.437 19:30:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:39.437 19:30:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.437 19:30:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.437 ************************************ 00:14:39.437 START TEST nvmf_tls 00:14:39.437 ************************************ 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:39.437 * Looking for test storage... 
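The bdevio run above finishes with the shared teardown: delete the test subsystem, stop the target, unload the NVMe/TCP fabric modules, and flush the addresses nvmf_veth_init assigned. Condensed from the nvmftestfini / nvmf_tcp_fini entries in the trace (namespace removal happens inside _remove_spdk_ns with tracing disabled, so the netns delete line below is an assumption about what it does):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    kill "$nvmfpid" && wait "$nvmfpid"     # pid 83282 in this run

    modprobe -v -r nvme-tcp                # -v shows the rmmod of nvme_tcp/nvme_keyring
    modprobe -v -r nvme-fabrics

    ip netns delete nvmf_tgt_ns_spdk       # assumed; done by _remove_spdk_ns
    ip -4 addr flush nvmf_init_if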
00:14:39.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:39.437 Cannot find device "nvmf_tgt_br" 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.437 Cannot find device "nvmf_tgt_br2" 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:39.437 Cannot find device "nvmf_tgt_br" 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:39.437 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:39.437 Cannot find device "nvmf_tgt_br2" 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:39.438 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.696 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:39.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:14:39.697 00:14:39.697 --- 10.0.0.2 ping statistics --- 00:14:39.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.697 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:39.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:39.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:39.697 00:14:39.697 --- 10.0.0.3 ping statistics --- 00:14:39.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.697 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:39.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:39.697 00:14:39.697 --- 10.0.0.1 ping statistics --- 00:14:39.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.697 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83521 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83521 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83521 ']' 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.697 19:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.955 [2024-07-15 19:30:29.526506] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:39.955 [2024-07-15 19:30:29.526600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.955 [2024-07-15 19:30:29.663209] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.955 [2024-07-15 19:30:29.724301] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.955 [2024-07-15 19:30:29.724367] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:39.955 [2024-07-15 19:30:29.724379] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.955 [2024-07-15 19:30:29.724387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.955 [2024-07-15 19:30:29.724394] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.955 [2024-07-15 19:30:29.724426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:40.886 19:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:41.143 true 00:14:41.143 19:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:41.143 19:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:41.400 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:41.400 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:41.400 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:41.658 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:41.658 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:41.915 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:41.915 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:41.915 19:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:42.526 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:42.526 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:42.783 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:42.783 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:42.783 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:42.783 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:43.040 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:43.040 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:43.040 19:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:43.297 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:43.297 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:14:43.555 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:43.555 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:43.555 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:43.829 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:43.829 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:44.396 19:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:44.396 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:44.396 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:44.396 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.kDESYgoLNW 00:14:44.396 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:44.396 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.i58FJ0Ywj5 00:14:44.396 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:44.397 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:44.397 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.kDESYgoLNW 00:14:44.397 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.i58FJ0Ywj5 00:14:44.397 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:44.670 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:44.929 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.kDESYgoLNW 
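The two NVMeTLSkey-1:01:... strings generated above are TLS pre-shared keys in the NVMe/TCP interchange format: a fixed prefix, a two-digit hash selector (01 here), and a base64 payload derived from the configured key bytes. The short sketch below shows one way such a key could be produced outside the test harness; it assumes the payload is the key bytes followed by a little-endian CRC-32, which matches the shape of the output above but should be checked against the format_key helper in nvmf/common.sh before reuse.

# Sketch only (assumption): interchange key =
#   "NVMeTLSkey-1:<hash>:" + base64(key_bytes + CRC32(key_bytes), CRC little-endian) + ":"
# Verify against nvmf/common.sh (format_key) before relying on this layout.
psk_key=00112233445566778899aabbccddeeff psk_hash=01 python3 - <<'PY'
import base64, binascii, os, struct

key = os.environ["psk_key"].encode()          # configured PSK bytes, as passed to format_interchange_psk
crc = binascii.crc32(key)                     # integrity check appended to the key material
blob = key + struct.pack("<I", crc)
print("NVMeTLSkey-1:%s:%s:" % (os.environ["psk_hash"], base64.b64encode(blob).decode()))
PY
# Expected to print a key of the same form as the NVMeTLSkey-1:01:MDAx...: value above.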
00:14:44.929 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kDESYgoLNW 00:14:44.929 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:45.186 [2024-07-15 19:30:34.936115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.186 19:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:45.444 19:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:45.702 [2024-07-15 19:30:35.472240] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.702 [2024-07-15 19:30:35.473254] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.702 19:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:46.268 malloc0 00:14:46.268 19:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:46.526 19:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDESYgoLNW 00:14:46.783 [2024-07-15 19:30:36.347215] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:46.783 19:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kDESYgoLNW 00:14:56.768 Initializing NVMe Controllers 00:14:56.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:56.768 Initialization complete. Launching workers. 
00:14:56.768 ======================================================== 00:14:56.769 Latency(us) 00:14:56.769 Device Information : IOPS MiB/s Average min max 00:14:56.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7058.70 27.57 9070.29 1717.57 12874.63 00:14:56.769 ======================================================== 00:14:56.769 Total : 7058.70 27.57 9070.29 1717.57 12874.63 00:14:56.769 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kDESYgoLNW 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kDESYgoLNW' 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83887 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83887 /var/tmp/bdevperf.sock 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83887 ']' 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.059 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.059 [2024-07-15 19:30:46.632347] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
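Before the bdevperf instance starting here comes up, the target side of the positive TLS case has already been configured and exercised by the spdk_nvme_perf run above. Stripped of the xtrace plumbing, that target setup reduces to the rpc.py sequence sketched below; the commands, NQNs and key path are the ones traced earlier in this log, issued against the target's default RPC socket.

# Condensed from the sock_impl_set_options / setup_nvmf_tgt trace above (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_impl_set_options -i ssl --tls-version 13          # require TLS 1.3 on the ssl sock impl
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDESYgoLNW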
00:14:57.059 [2024-07-15 19:30:46.633106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83887 ] 00:14:57.059 [2024-07-15 19:30:46.773538] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.327 [2024-07-15 19:30:46.863672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.327 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.327 19:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:57.327 19:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDESYgoLNW 00:14:57.587 [2024-07-15 19:30:47.317206] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.587 [2024-07-15 19:30:47.317397] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:57.846 TLSTESTn1 00:14:57.846 19:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:57.846 Running I/O for 10 seconds... 00:15:07.809 00:15:07.809 Latency(us) 00:15:07.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.809 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:07.809 Verification LBA range: start 0x0 length 0x2000 00:15:07.809 TLSTESTn1 : 10.02 2829.21 11.05 0.00 0.00 45156.30 7983.48 45279.42 00:15:07.809 =================================================================================================================== 00:15:07.809 Total : 2829.21 11.05 0.00 0.00 45156.30 7983.48 45279.42 00:15:07.809 0 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83887 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83887 ']' 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83887 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83887 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:08.067 killing process with pid 83887 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83887' 00:15:08.067 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.067 00:15:08.067 Latency(us) 00:15:08.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.067 =================================================================================================================== 00:15:08.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83887 00:15:08.067 [2024-07-15 19:30:57.654425] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83887 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i58FJ0Ywj5 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i58FJ0Ywj5 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i58FJ0Ywj5 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i58FJ0Ywj5' 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84020 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84020 /var/tmp/bdevperf.sock 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84020 ']' 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.067 19:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.325 [2024-07-15 19:30:57.889485] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
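The bdevperf process being launched here (pid 84020) drives the first negative case: the initiator presents /tmp/tmp.i58FJ0Ywj5, a key that was never registered on the target for host1, so the controller attach is expected to fail. On the initiator side the whole case is a single RPC against the bdevperf socket, sketched below with the same arguments that appear in the trace that follows; the NOT wrapper around run_bdevperf simply inverts the exit status.

# Initiator-side attach over TLS (sketch; identical to the call traced below).
# With the unregistered key the target cannot find a PSK for this host/subsystem
# pair, so bdev_nvme_attach_controller is expected to return an I/O error.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.i58FJ0Ywj5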
00:15:08.325 [2024-07-15 19:30:57.889588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84020 ] 00:15:08.325 [2024-07-15 19:30:58.022924] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.325 [2024-07-15 19:30:58.084976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.582 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.582 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:08.582 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i58FJ0Ywj5 00:15:08.840 [2024-07-15 19:30:58.494690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.840 [2024-07-15 19:30:58.494851] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.840 [2024-07-15 19:30:58.499995] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:08.840 [2024-07-15 19:30:58.500527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1205e50 (107): Transport endpoint is not connected 00:15:08.840 [2024-07-15 19:30:58.501507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1205e50 (9): Bad file descriptor 00:15:08.840 [2024-07-15 19:30:58.502503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:08.840 [2024-07-15 19:30:58.502532] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:08.840 [2024-07-15 19:30:58.502548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:08.840 2024/07/15 19:30:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.i58FJ0Ywj5 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:08.840 request: 00:15:08.840 { 00:15:08.840 "method": "bdev_nvme_attach_controller", 00:15:08.840 "params": { 00:15:08.840 "name": "TLSTEST", 00:15:08.840 "trtype": "tcp", 00:15:08.840 "traddr": "10.0.0.2", 00:15:08.840 "adrfam": "ipv4", 00:15:08.840 "trsvcid": "4420", 00:15:08.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.840 "prchk_reftag": false, 00:15:08.840 "prchk_guard": false, 00:15:08.840 "hdgst": false, 00:15:08.840 "ddgst": false, 00:15:08.840 "psk": "/tmp/tmp.i58FJ0Ywj5" 00:15:08.840 } 00:15:08.840 } 00:15:08.840 Got JSON-RPC error response 00:15:08.840 GoRPCClient: error on JSON-RPC call 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84020 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84020 ']' 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84020 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84020 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:08.840 killing process with pid 84020 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84020' 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84020 00:15:08.840 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.840 00:15:08.840 Latency(us) 00:15:08.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.840 =================================================================================================================== 00:15:08.840 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.840 [2024-07-15 19:30:58.545493] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.840 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84020 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kDESYgoLNW 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kDESYgoLNW 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kDESYgoLNW 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kDESYgoLNW' 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84052 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84052 /var/tmp/bdevperf.sock 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84052 ']' 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.098 19:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.098 [2024-07-15 19:30:58.759919] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
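This next bdevperf run (pid 84052) reuses the valid key but connects as host2. During the TLS handshake the target looks up a PSK by an identity string built from the host and subsystem NQNs, and only the host1/cnode1 pair was registered with nvmf_subsystem_add_host, so the lookup fails. A minimal sketch of that identity string follows; the prefix layout (NVMe0R01 encoding the revision and hash) is taken from the error messages in this trace and should be treated as an assumption rather than a spec reference.

# Sketch (assumption): the PSK identity the target tries to resolve is a fixed
# prefix plus the host and subsystem NQNs, matching the
# "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>" errors below.
psk_identity() {
    local hostnqn=$1 subnqn=$2
    echo "NVMe0R01 ${hostnqn} ${subnqn}"
}
psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# Only host1/cnode1 has a registered key, so this identity has no match on the target.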
00:15:09.098 [2024-07-15 19:30:58.760030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84052 ] 00:15:09.098 [2024-07-15 19:30:58.895088] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.355 [2024-07-15 19:30:58.955649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.287 19:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.287 19:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:10.287 19:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.kDESYgoLNW 00:15:10.546 [2024-07-15 19:31:00.245073] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.546 [2024-07-15 19:31:00.245194] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:10.546 [2024-07-15 19:31:00.250086] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:10.546 [2024-07-15 19:31:00.250136] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:10.546 [2024-07-15 19:31:00.250196] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:10.546 [2024-07-15 19:31:00.250788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x902e50 (107): Transport endpoint is not connected 00:15:10.546 [2024-07-15 19:31:00.251765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x902e50 (9): Bad file descriptor 00:15:10.546 [2024-07-15 19:31:00.252760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:10.546 [2024-07-15 19:31:00.252795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:10.546 [2024-07-15 19:31:00.252812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:10.546 2024/07/15 19:31:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kDESYgoLNW subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:10.546 request: 00:15:10.546 { 00:15:10.546 "method": "bdev_nvme_attach_controller", 00:15:10.546 "params": { 00:15:10.546 "name": "TLSTEST", 00:15:10.546 "trtype": "tcp", 00:15:10.546 "traddr": "10.0.0.2", 00:15:10.546 "adrfam": "ipv4", 00:15:10.546 "trsvcid": "4420", 00:15:10.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.546 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:10.546 "prchk_reftag": false, 00:15:10.546 "prchk_guard": false, 00:15:10.546 "hdgst": false, 00:15:10.546 "ddgst": false, 00:15:10.546 "psk": "/tmp/tmp.kDESYgoLNW" 00:15:10.546 } 00:15:10.546 } 00:15:10.546 Got JSON-RPC error response 00:15:10.546 GoRPCClient: error on JSON-RPC call 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84052 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84052 ']' 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84052 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84052 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:10.546 killing process with pid 84052 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84052' 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84052 00:15:10.546 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.546 00:15:10.546 Latency(us) 00:15:10.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.546 =================================================================================================================== 00:15:10.546 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:10.546 [2024-07-15 19:31:00.300246] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:10.546 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84052 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kDESYgoLNW 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kDESYgoLNW 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kDESYgoLNW 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kDESYgoLNW' 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84102 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84102 /var/tmp/bdevperf.sock 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84102 ']' 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.804 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.804 [2024-07-15 19:31:00.539375] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:10.804 [2024-07-15 19:31:00.539471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84102 ] 00:15:11.060 [2024-07-15 19:31:00.670897] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.060 [2024-07-15 19:31:00.739877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.316 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.316 19:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:11.316 19:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDESYgoLNW 00:15:11.573 [2024-07-15 19:31:01.167587] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:11.573 [2024-07-15 19:31:01.167759] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:11.573 [2024-07-15 19:31:01.177650] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:11.573 [2024-07-15 19:31:01.177708] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:11.573 [2024-07-15 19:31:01.177782] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:11.573 [2024-07-15 19:31:01.178569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2063e50 (107): Transport endpoint is not connected 00:15:11.573 [2024-07-15 19:31:01.179539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2063e50 (9): Bad file descriptor 00:15:11.573 [2024-07-15 19:31:01.180535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:11.573 [2024-07-15 19:31:01.180571] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:11.573 [2024-07-15 19:31:01.180587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:11.573 2024/07/15 19:31:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kDESYgoLNW subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:11.573 request: 00:15:11.573 { 00:15:11.573 "method": "bdev_nvme_attach_controller", 00:15:11.573 "params": { 00:15:11.573 "name": "TLSTEST", 00:15:11.573 "trtype": "tcp", 00:15:11.573 "traddr": "10.0.0.2", 00:15:11.573 "adrfam": "ipv4", 00:15:11.573 "trsvcid": "4420", 00:15:11.573 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:11.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.573 "prchk_reftag": false, 00:15:11.573 "prchk_guard": false, 00:15:11.573 "hdgst": false, 00:15:11.573 "ddgst": false, 00:15:11.573 "psk": "/tmp/tmp.kDESYgoLNW" 00:15:11.573 } 00:15:11.573 } 00:15:11.573 Got JSON-RPC error response 00:15:11.573 GoRPCClient: error on JSON-RPC call 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84102 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84102 ']' 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84102 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84102 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:11.573 killing process with pid 84102 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84102' 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84102 00:15:11.573 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.573 00:15:11.573 Latency(us) 00:15:11.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.573 =================================================================================================================== 00:15:11.573 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:11.573 [2024-07-15 19:31:01.233398] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:11.573 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84102 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- 
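Taken together, the three failed attaches above probe the same PSK lookup from three directions: a key the target never registered, a host NQN with no key, and a subsystem NQN with no key. The loop below condenses them into one sketch; the rpc path, addresses, NQNs and key paths are the ones used in this trace, and the NOT/killprocess scaffolding from autotest_common.sh is omitted. Each attach is expected to fail with Code=-5 Input/output error, exactly as in the JSON-RPC responses above.

# Sketch: the three negative cases exercised above, in order
#   wrong key       : host1 -> cnode1 with /tmp/tmp.i58FJ0Ywj5
#   wrong host NQN  : host2 -> cnode1 with /tmp/tmp.kDESYgoLNW
#   wrong subsystem : host1 -> cnode2 with /tmp/tmp.kDESYgoLNW
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for args in \
    "nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i58FJ0Ywj5" \
    "nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kDESYgoLNW" \
    "nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kDESYgoLNW"; do
    set -- $args    # $1=subnqn $2=hostnqn $3=psk path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$1" -q "$2" --psk "$3" \
        && echo "unexpected success for $2 -> $1"
done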
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84130 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84130 /var/tmp/bdevperf.sock 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84130 ']' 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.831 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.831 [2024-07-15 19:31:01.451109] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:11.831 [2024-07-15 19:31:01.451203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84130 ] 00:15:11.831 [2024-07-15 19:31:01.589017] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.088 [2024-07-15 19:31:01.651268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.088 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.088 19:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:12.088 19:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:12.655 [2024-07-15 19:31:02.237686] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:12.655 [2024-07-15 19:31:02.239667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f3e0 (9): Bad file descriptor 00:15:12.655 [2024-07-15 19:31:02.240660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:12.655 [2024-07-15 19:31:02.240714] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:12.655 [2024-07-15 19:31:02.240733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:12.655 2024/07/15 19:31:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:12.655 request: 00:15:12.655 { 00:15:12.655 "method": "bdev_nvme_attach_controller", 00:15:12.655 "params": { 00:15:12.655 "name": "TLSTEST", 00:15:12.655 "trtype": "tcp", 00:15:12.655 "traddr": "10.0.0.2", 00:15:12.655 "adrfam": "ipv4", 00:15:12.655 "trsvcid": "4420", 00:15:12.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.655 "prchk_reftag": false, 00:15:12.655 "prchk_guard": false, 00:15:12.655 "hdgst": false, 00:15:12.655 "ddgst": false 00:15:12.655 } 00:15:12.655 } 00:15:12.655 Got JSON-RPC error response 00:15:12.655 GoRPCClient: error on JSON-RPC call 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84130 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84130 ']' 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84130 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84130 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:12.655 killing process with pid 84130 00:15:12.655 19:31:02 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84130' 00:15:12.655 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.655 00:15:12.655 Latency(us) 00:15:12.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.655 =================================================================================================================== 00:15:12.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84130 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84130 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83521 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83521 ']' 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83521 00:15:12.655 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83521 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:12.914 killing process with pid 83521 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83521' 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83521 00:15:12.914 [2024-07-15 19:31:02.476818] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83521 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # 
key_long_path=/tmp/tmp.2n47uQD22B 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.2n47uQD22B 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84172 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84172 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84172 ']' 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.914 19:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.171 [2024-07-15 19:31:02.778052] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:13.171 [2024-07-15 19:31:02.778194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.171 [2024-07-15 19:31:02.919789] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.429 [2024-07-15 19:31:03.005700] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.429 [2024-07-15 19:31:03.005785] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.429 [2024-07-15 19:31:03.005808] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.429 [2024-07-15 19:31:03.005822] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.429 [2024-07-15 19:31:03.005834] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
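The long key used for the rest of the run is produced just above by format_interchange_psk: the key string 00112233445566778899aabbccddeeff0011223344556677 with hash identifier 2 becomes NVMeTLSkey-1:02:MDAx...wWXNJw==: and is written to the mktemp path /tmp/tmp.2n47uQD22B with mode 0600. A minimal sketch of what that helper appears to compute is shown below; the little endian CRC-32 trailer is an assumption based on the NVMe/TCP PSK interchange convention (the base64 payload visibly contains the key string itself), so verify any reimplementation against the value captured in the trace before relying on it.

# sketch only: build an NVMe TLS interchange key from a key string and a hash id
format_interchange_psk_sketch() {
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                    # the key string as given, not hex decoded
digest = int(sys.argv[2])                     # 2 selects the :02: form seen in this run
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte little endian CRC-32 trailer
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
' "$1" "$2"
}
# example, matching the key used by this test:
# format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 2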
00:15:13.429 [2024-07-15 19:31:03.005879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.2n47uQD22B 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2n47uQD22B 00:15:14.026 19:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:14.297 [2024-07-15 19:31:04.054226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.297 19:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.862 19:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:15.119 [2024-07-15 19:31:04.878566] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.119 [2024-07-15 19:31:04.878869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.119 19:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:15.683 malloc0 00:15:15.683 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:15.940 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:16.198 [2024-07-15 19:31:05.915491] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2n47uQD22B 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2n47uQD22B' 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84280 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.198 19:31:05 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84280 /var/tmp/bdevperf.sock 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84280 ']' 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.198 19:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.198 [2024-07-15 19:31:05.987086] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:16.198 [2024-07-15 19:31:05.987226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84280 ] 00:15:16.455 [2024-07-15 19:31:06.127500] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.455 [2024-07-15 19:31:06.216292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.387 19:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.387 19:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:17.387 19:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:17.644 [2024-07-15 19:31:07.410680] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.644 [2024-07-15 19:31:07.410860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:17.902 TLSTESTn1 00:15:17.902 19:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:17.902 Running I/O for 10 seconds... 
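Before the numbers below, it helps to collect what this first successful TLS pass actually ran. The sketch is a condensed replay of the trace above, using the same rpc.py invocations (target RPCs go to the default /var/tmp/spdk.sock, the initiator side to the bdevperf socket); it assumes nvmf_tgt and the bdevperf -z instance are already up, as they are at this point in the log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.2n47uQD22B                  # 0600 interchange key created earlier

# target side: TLS-capable TCP listener plus a malloc namespace
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# initiator side: attach over TLS and drive the verify workload
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key"
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests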
00:15:30.102 00:15:30.102 Latency(us) 00:15:30.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.102 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:30.102 Verification LBA range: start 0x0 length 0x2000 00:15:30.102 TLSTESTn1 : 10.04 3040.50 11.88 0.00 0.00 41975.10 8102.63 37415.10 00:15:30.102 =================================================================================================================== 00:15:30.102 Total : 3040.50 11.88 0.00 0.00 41975.10 8102.63 37415.10 00:15:30.102 0 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84280 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84280 ']' 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84280 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84280 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:30.102 killing process with pid 84280 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84280' 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84280 00:15:30.102 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.102 00:15:30.102 Latency(us) 00:15:30.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.102 =================================================================================================================== 00:15:30.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.102 [2024-07-15 19:31:17.778389] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84280 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.2n47uQD22B 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2n47uQD22B 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2n47uQD22B 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2n47uQD22B 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:30.102 
19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2n47uQD22B' 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84433 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84433 /var/tmp/bdevperf.sock 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84433 ']' 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.102 19:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.102 [2024-07-15 19:31:18.017700] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:30.102 [2024-07-15 19:31:18.017827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84433 ] 00:15:30.102 [2024-07-15 19:31:18.158471] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.102 [2024-07-15 19:31:18.219293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.102 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.102 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:30.102 19:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:30.102 [2024-07-15 19:31:18.532376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:30.102 [2024-07-15 19:31:18.532459] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:30.102 [2024-07-15 19:31:18.532471] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.2n47uQD22B 00:15:30.103 2024/07/15 19:31:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.2n47uQD22B subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:30.103 request: 00:15:30.103 { 00:15:30.103 "method": "bdev_nvme_attach_controller", 00:15:30.103 "params": { 00:15:30.103 "name": "TLSTEST", 00:15:30.103 "trtype": "tcp", 00:15:30.103 "traddr": "10.0.0.2", 00:15:30.103 "adrfam": "ipv4", 00:15:30.103 "trsvcid": "4420", 00:15:30.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.103 "prchk_reftag": false, 00:15:30.103 "prchk_guard": false, 00:15:30.103 "hdgst": false, 00:15:30.103 "ddgst": false, 00:15:30.103 "psk": "/tmp/tmp.2n47uQD22B" 00:15:30.103 } 00:15:30.103 } 00:15:30.103 Got JSON-RPC error response 00:15:30.103 GoRPCClient: error on JSON-RPC call 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84433 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84433 ']' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84433 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84433 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:30.103 killing process with pid 84433 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84433' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84433 00:15:30.103 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.103 00:15:30.103 Latency(us) 00:15:30.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.103 =================================================================================================================== 00:15:30.103 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84433 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84172 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84172 ']' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84172 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84172 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:30.103 killing process with pid 84172 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84172' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84172 00:15:30.103 [2024-07-15 19:31:18.765260] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84172 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84470 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84470 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84470 ']' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.103 19:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.103 [2024-07-15 19:31:19.036673] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:30.103 [2024-07-15 19:31:19.036820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.103 [2024-07-15 19:31:19.178841] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.103 [2024-07-15 19:31:19.241724] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.103 [2024-07-15 19:31:19.241794] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.103 [2024-07-15 19:31:19.241805] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.103 [2024-07-15 19:31:19.241814] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.103 [2024-07-15 19:31:19.241821] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.103 [2024-07-15 19:31:19.241850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.2n47uQD22B 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2n47uQD22B 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.2n47uQD22B 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2n47uQD22B 00:15:30.361 19:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:30.618 [2024-07-15 19:31:20.337195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.618 19:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:30.876 19:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:31.134 [2024-07-15 19:31:20.921276] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:31.134 [2024-07-15 19:31:20.921532] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.391 19:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:31.391 malloc0 00:15:31.650 19:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:31.908 19:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:32.166 [2024-07-15 19:31:21.776496] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:32.166 [2024-07-15 19:31:21.777021] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:32.166 [2024-07-15 19:31:21.777161] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:32.166 2024/07/15 19:31:21 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.2n47uQD22B], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:32.166 request: 00:15:32.166 { 00:15:32.166 "method": "nvmf_subsystem_add_host", 00:15:32.166 "params": { 00:15:32.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.166 "host": "nqn.2016-06.io.spdk:host1", 00:15:32.166 "psk": "/tmp/tmp.2n47uQD22B" 00:15:32.166 } 00:15:32.166 } 00:15:32.166 Got JSON-RPC error response 00:15:32.166 GoRPCClient: error on JSON-RPC call 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84470 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84470 ']' 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84470 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84470 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:32.166 killing process with pid 84470 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84470' 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84470 00:15:32.166 19:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84470 00:15:32.423 19:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.2n47uQD22B 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84581 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84581 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84581 ']' 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
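Both permission failures above are the point of the chmod 0666 step: with the key world readable, the initiator refuses it in bdev_nvme_load_psk ("Incorrect permissions for PSK file") and, after the target is restarted, nvmf_subsystem_add_host fails the same way in tcp_load_psk, so the test restores 0600 at target/tls.sh@181 before bringing the stack back up. A tiny guard in the same spirit, assuming GNU stat and the key path used in this run, would be:

key=/tmp/tmp.2n47uQD22B
# in this run both the initiator and the target reject the 0666 key file
if [ "$(stat -c '%a' "$key")" != 600 ]; then
        echo "unexpected mode $(stat -c '%a' "$key") on $key, restoring 0600" >&2
        chmod 0600 "$key"
fi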
00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.423 19:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.423 [2024-07-15 19:31:22.092274] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:32.423 [2024-07-15 19:31:22.092434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.680 [2024-07-15 19:31:22.238192] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.680 [2024-07-15 19:31:22.305557] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.680 [2024-07-15 19:31:22.305630] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.680 [2024-07-15 19:31:22.305649] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.680 [2024-07-15 19:31:22.305662] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.680 [2024-07-15 19:31:22.305674] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.680 [2024-07-15 19:31:22.305709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.2n47uQD22B 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2n47uQD22B 00:15:33.612 19:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:33.869 [2024-07-15 19:31:23.469401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.869 19:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.434 19:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:34.691 [2024-07-15 19:31:24.265756] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.691 [2024-07-15 19:31:24.266321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.691 19:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:34.948 malloc0 00:15:34.948 19:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:35.204 19:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:35.769 [2024-07-15 19:31:25.373044] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84689 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84689 /var/tmp/bdevperf.sock 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84689 ']' 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.769 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.769 [2024-07-15 19:31:25.480539] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:35.769 [2024-07-15 19:31:25.480695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84689 ] 00:15:36.027 [2024-07-15 19:31:25.623382] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.027 [2024-07-15 19:31:25.710939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.027 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.027 19:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:36.027 19:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:36.592 [2024-07-15 19:31:26.155649] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.592 [2024-07-15 19:31:26.155806] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:36.592 TLSTESTn1 00:15:36.592 19:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:37.158 19:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:37.158 "subsystems": [ 00:15:37.158 { 00:15:37.158 "subsystem": "keyring", 00:15:37.158 "config": [] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "iobuf", 00:15:37.158 "config": [ 00:15:37.158 { 00:15:37.158 "method": "iobuf_set_options", 00:15:37.158 "params": { 00:15:37.158 "large_bufsize": 
135168, 00:15:37.158 "large_pool_count": 1024, 00:15:37.158 "small_bufsize": 8192, 00:15:37.158 "small_pool_count": 8192 00:15:37.158 } 00:15:37.158 } 00:15:37.158 ] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "sock", 00:15:37.158 "config": [ 00:15:37.158 { 00:15:37.158 "method": "sock_set_default_impl", 00:15:37.158 "params": { 00:15:37.158 "impl_name": "posix" 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "sock_impl_set_options", 00:15:37.158 "params": { 00:15:37.158 "enable_ktls": false, 00:15:37.158 "enable_placement_id": 0, 00:15:37.158 "enable_quickack": false, 00:15:37.158 "enable_recv_pipe": true, 00:15:37.158 "enable_zerocopy_send_client": false, 00:15:37.158 "enable_zerocopy_send_server": true, 00:15:37.158 "impl_name": "ssl", 00:15:37.158 "recv_buf_size": 4096, 00:15:37.158 "send_buf_size": 4096, 00:15:37.158 "tls_version": 0, 00:15:37.158 "zerocopy_threshold": 0 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "sock_impl_set_options", 00:15:37.158 "params": { 00:15:37.158 "enable_ktls": false, 00:15:37.158 "enable_placement_id": 0, 00:15:37.158 "enable_quickack": false, 00:15:37.158 "enable_recv_pipe": true, 00:15:37.158 "enable_zerocopy_send_client": false, 00:15:37.158 "enable_zerocopy_send_server": true, 00:15:37.158 "impl_name": "posix", 00:15:37.158 "recv_buf_size": 2097152, 00:15:37.158 "send_buf_size": 2097152, 00:15:37.158 "tls_version": 0, 00:15:37.158 "zerocopy_threshold": 0 00:15:37.158 } 00:15:37.158 } 00:15:37.158 ] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "vmd", 00:15:37.158 "config": [] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "accel", 00:15:37.158 "config": [ 00:15:37.158 { 00:15:37.158 "method": "accel_set_options", 00:15:37.158 "params": { 00:15:37.158 "buf_count": 2048, 00:15:37.158 "large_cache_size": 16, 00:15:37.158 "sequence_count": 2048, 00:15:37.158 "small_cache_size": 128, 00:15:37.158 "task_count": 2048 00:15:37.158 } 00:15:37.158 } 00:15:37.158 ] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "bdev", 00:15:37.158 "config": [ 00:15:37.158 { 00:15:37.158 "method": "bdev_set_options", 00:15:37.158 "params": { 00:15:37.158 "bdev_auto_examine": true, 00:15:37.158 "bdev_io_cache_size": 256, 00:15:37.158 "bdev_io_pool_size": 65535, 00:15:37.158 "iobuf_large_cache_size": 16, 00:15:37.158 "iobuf_small_cache_size": 128 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "bdev_raid_set_options", 00:15:37.158 "params": { 00:15:37.158 "process_window_size_kb": 1024 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "bdev_iscsi_set_options", 00:15:37.158 "params": { 00:15:37.158 "timeout_sec": 30 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "bdev_nvme_set_options", 00:15:37.158 "params": { 00:15:37.158 "action_on_timeout": "none", 00:15:37.158 "allow_accel_sequence": false, 00:15:37.158 "arbitration_burst": 0, 00:15:37.158 "bdev_retry_count": 3, 00:15:37.158 "ctrlr_loss_timeout_sec": 0, 00:15:37.158 "delay_cmd_submit": true, 00:15:37.158 "dhchap_dhgroups": [ 00:15:37.158 "null", 00:15:37.158 "ffdhe2048", 00:15:37.158 "ffdhe3072", 00:15:37.158 "ffdhe4096", 00:15:37.158 "ffdhe6144", 00:15:37.158 "ffdhe8192" 00:15:37.158 ], 00:15:37.158 "dhchap_digests": [ 00:15:37.158 "sha256", 00:15:37.158 "sha384", 00:15:37.158 "sha512" 00:15:37.158 ], 00:15:37.158 "disable_auto_failback": false, 00:15:37.158 "fast_io_fail_timeout_sec": 0, 00:15:37.158 "generate_uuids": false, 00:15:37.158 "high_priority_weight": 0, 
00:15:37.158 "io_path_stat": false, 00:15:37.158 "io_queue_requests": 0, 00:15:37.158 "keep_alive_timeout_ms": 10000, 00:15:37.158 "low_priority_weight": 0, 00:15:37.158 "medium_priority_weight": 0, 00:15:37.158 "nvme_adminq_poll_period_us": 10000, 00:15:37.158 "nvme_error_stat": false, 00:15:37.158 "nvme_ioq_poll_period_us": 0, 00:15:37.158 "rdma_cm_event_timeout_ms": 0, 00:15:37.158 "rdma_max_cq_size": 0, 00:15:37.158 "rdma_srq_size": 0, 00:15:37.158 "reconnect_delay_sec": 0, 00:15:37.158 "timeout_admin_us": 0, 00:15:37.158 "timeout_us": 0, 00:15:37.158 "transport_ack_timeout": 0, 00:15:37.158 "transport_retry_count": 4, 00:15:37.158 "transport_tos": 0 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "bdev_nvme_set_hotplug", 00:15:37.158 "params": { 00:15:37.158 "enable": false, 00:15:37.158 "period_us": 100000 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "bdev_malloc_create", 00:15:37.158 "params": { 00:15:37.158 "block_size": 4096, 00:15:37.158 "name": "malloc0", 00:15:37.158 "num_blocks": 8192, 00:15:37.158 "optimal_io_boundary": 0, 00:15:37.158 "physical_block_size": 4096, 00:15:37.158 "uuid": "93ef85ab-caf7-4fc5-b940-e6f79aedc122" 00:15:37.158 } 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "method": "bdev_wait_for_examine" 00:15:37.158 } 00:15:37.158 ] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "nbd", 00:15:37.158 "config": [] 00:15:37.158 }, 00:15:37.158 { 00:15:37.158 "subsystem": "scheduler", 00:15:37.158 "config": [ 00:15:37.158 { 00:15:37.158 "method": "framework_set_scheduler", 00:15:37.158 "params": { 00:15:37.159 "name": "static" 00:15:37.159 } 00:15:37.159 } 00:15:37.159 ] 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "subsystem": "nvmf", 00:15:37.159 "config": [ 00:15:37.159 { 00:15:37.159 "method": "nvmf_set_config", 00:15:37.159 "params": { 00:15:37.159 "admin_cmd_passthru": { 00:15:37.159 "identify_ctrlr": false 00:15:37.159 }, 00:15:37.159 "discovery_filter": "match_any" 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": "nvmf_set_max_subsystems", 00:15:37.159 "params": { 00:15:37.159 "max_subsystems": 1024 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": "nvmf_set_crdt", 00:15:37.159 "params": { 00:15:37.159 "crdt1": 0, 00:15:37.159 "crdt2": 0, 00:15:37.159 "crdt3": 0 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": "nvmf_create_transport", 00:15:37.159 "params": { 00:15:37.159 "abort_timeout_sec": 1, 00:15:37.159 "ack_timeout": 0, 00:15:37.159 "buf_cache_size": 4294967295, 00:15:37.159 "c2h_success": false, 00:15:37.159 "data_wr_pool_size": 0, 00:15:37.159 "dif_insert_or_strip": false, 00:15:37.159 "in_capsule_data_size": 4096, 00:15:37.159 "io_unit_size": 131072, 00:15:37.159 "max_aq_depth": 128, 00:15:37.159 "max_io_qpairs_per_ctrlr": 127, 00:15:37.159 "max_io_size": 131072, 00:15:37.159 "max_queue_depth": 128, 00:15:37.159 "num_shared_buffers": 511, 00:15:37.159 "sock_priority": 0, 00:15:37.159 "trtype": "TCP", 00:15:37.159 "zcopy": false 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": "nvmf_create_subsystem", 00:15:37.159 "params": { 00:15:37.159 "allow_any_host": false, 00:15:37.159 "ana_reporting": false, 00:15:37.159 "max_cntlid": 65519, 00:15:37.159 "max_namespaces": 10, 00:15:37.159 "min_cntlid": 1, 00:15:37.159 "model_number": "SPDK bdev Controller", 00:15:37.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.159 "serial_number": "SPDK00000000000001" 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": 
"nvmf_subsystem_add_host", 00:15:37.159 "params": { 00:15:37.159 "host": "nqn.2016-06.io.spdk:host1", 00:15:37.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.159 "psk": "/tmp/tmp.2n47uQD22B" 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": "nvmf_subsystem_add_ns", 00:15:37.159 "params": { 00:15:37.159 "namespace": { 00:15:37.159 "bdev_name": "malloc0", 00:15:37.159 "nguid": "93EF85ABCAF74FC5B940E6F79AEDC122", 00:15:37.159 "no_auto_visible": false, 00:15:37.159 "nsid": 1, 00:15:37.159 "uuid": "93ef85ab-caf7-4fc5-b940-e6f79aedc122" 00:15:37.159 }, 00:15:37.159 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:37.159 } 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "method": "nvmf_subsystem_add_listener", 00:15:37.159 "params": { 00:15:37.159 "listen_address": { 00:15:37.159 "adrfam": "IPv4", 00:15:37.159 "traddr": "10.0.0.2", 00:15:37.159 "trsvcid": "4420", 00:15:37.159 "trtype": "TCP" 00:15:37.159 }, 00:15:37.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.159 "secure_channel": true 00:15:37.159 } 00:15:37.159 } 00:15:37.159 ] 00:15:37.159 } 00:15:37.159 ] 00:15:37.159 }' 00:15:37.159 19:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:37.418 19:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:37.418 "subsystems": [ 00:15:37.418 { 00:15:37.418 "subsystem": "keyring", 00:15:37.418 "config": [] 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "subsystem": "iobuf", 00:15:37.418 "config": [ 00:15:37.418 { 00:15:37.418 "method": "iobuf_set_options", 00:15:37.418 "params": { 00:15:37.418 "large_bufsize": 135168, 00:15:37.418 "large_pool_count": 1024, 00:15:37.418 "small_bufsize": 8192, 00:15:37.418 "small_pool_count": 8192 00:15:37.418 } 00:15:37.418 } 00:15:37.418 ] 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "subsystem": "sock", 00:15:37.418 "config": [ 00:15:37.418 { 00:15:37.418 "method": "sock_set_default_impl", 00:15:37.418 "params": { 00:15:37.418 "impl_name": "posix" 00:15:37.418 } 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "method": "sock_impl_set_options", 00:15:37.418 "params": { 00:15:37.418 "enable_ktls": false, 00:15:37.418 "enable_placement_id": 0, 00:15:37.418 "enable_quickack": false, 00:15:37.418 "enable_recv_pipe": true, 00:15:37.418 "enable_zerocopy_send_client": false, 00:15:37.418 "enable_zerocopy_send_server": true, 00:15:37.418 "impl_name": "ssl", 00:15:37.418 "recv_buf_size": 4096, 00:15:37.418 "send_buf_size": 4096, 00:15:37.418 "tls_version": 0, 00:15:37.418 "zerocopy_threshold": 0 00:15:37.418 } 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "method": "sock_impl_set_options", 00:15:37.418 "params": { 00:15:37.418 "enable_ktls": false, 00:15:37.418 "enable_placement_id": 0, 00:15:37.418 "enable_quickack": false, 00:15:37.418 "enable_recv_pipe": true, 00:15:37.418 "enable_zerocopy_send_client": false, 00:15:37.418 "enable_zerocopy_send_server": true, 00:15:37.418 "impl_name": "posix", 00:15:37.418 "recv_buf_size": 2097152, 00:15:37.418 "send_buf_size": 2097152, 00:15:37.418 "tls_version": 0, 00:15:37.418 "zerocopy_threshold": 0 00:15:37.418 } 00:15:37.418 } 00:15:37.418 ] 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "subsystem": "vmd", 00:15:37.418 "config": [] 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "subsystem": "accel", 00:15:37.418 "config": [ 00:15:37.418 { 00:15:37.418 "method": "accel_set_options", 00:15:37.418 "params": { 00:15:37.418 "buf_count": 2048, 00:15:37.418 "large_cache_size": 16, 00:15:37.418 "sequence_count": 2048, 00:15:37.418 
"small_cache_size": 128, 00:15:37.418 "task_count": 2048 00:15:37.418 } 00:15:37.418 } 00:15:37.418 ] 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "subsystem": "bdev", 00:15:37.418 "config": [ 00:15:37.418 { 00:15:37.418 "method": "bdev_set_options", 00:15:37.418 "params": { 00:15:37.418 "bdev_auto_examine": true, 00:15:37.418 "bdev_io_cache_size": 256, 00:15:37.418 "bdev_io_pool_size": 65535, 00:15:37.418 "iobuf_large_cache_size": 16, 00:15:37.418 "iobuf_small_cache_size": 128 00:15:37.418 } 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "method": "bdev_raid_set_options", 00:15:37.418 "params": { 00:15:37.418 "process_window_size_kb": 1024 00:15:37.418 } 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "method": "bdev_iscsi_set_options", 00:15:37.418 "params": { 00:15:37.418 "timeout_sec": 30 00:15:37.418 } 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "method": "bdev_nvme_set_options", 00:15:37.418 "params": { 00:15:37.418 "action_on_timeout": "none", 00:15:37.418 "allow_accel_sequence": false, 00:15:37.418 "arbitration_burst": 0, 00:15:37.418 "bdev_retry_count": 3, 00:15:37.418 "ctrlr_loss_timeout_sec": 0, 00:15:37.418 "delay_cmd_submit": true, 00:15:37.418 "dhchap_dhgroups": [ 00:15:37.418 "null", 00:15:37.418 "ffdhe2048", 00:15:37.418 "ffdhe3072", 00:15:37.418 "ffdhe4096", 00:15:37.418 "ffdhe6144", 00:15:37.418 "ffdhe8192" 00:15:37.418 ], 00:15:37.418 "dhchap_digests": [ 00:15:37.418 "sha256", 00:15:37.418 "sha384", 00:15:37.418 "sha512" 00:15:37.418 ], 00:15:37.418 "disable_auto_failback": false, 00:15:37.418 "fast_io_fail_timeout_sec": 0, 00:15:37.418 "generate_uuids": false, 00:15:37.418 "high_priority_weight": 0, 00:15:37.418 "io_path_stat": false, 00:15:37.418 "io_queue_requests": 512, 00:15:37.418 "keep_alive_timeout_ms": 10000, 00:15:37.418 "low_priority_weight": 0, 00:15:37.418 "medium_priority_weight": 0, 00:15:37.418 "nvme_adminq_poll_period_us": 10000, 00:15:37.418 "nvme_error_stat": false, 00:15:37.418 "nvme_ioq_poll_period_us": 0, 00:15:37.418 "rdma_cm_event_timeout_ms": 0, 00:15:37.418 "rdma_max_cq_size": 0, 00:15:37.418 "rdma_srq_size": 0, 00:15:37.418 "reconnect_delay_sec": 0, 00:15:37.418 "timeout_admin_us": 0, 00:15:37.418 "timeout_us": 0, 00:15:37.418 "transport_ack_timeout": 0, 00:15:37.418 "transport_retry_count": 4, 00:15:37.418 "transport_tos": 0 00:15:37.418 } 00:15:37.418 }, 00:15:37.418 { 00:15:37.418 "method": "bdev_nvme_attach_controller", 00:15:37.418 "params": { 00:15:37.418 "adrfam": "IPv4", 00:15:37.418 "ctrlr_loss_timeout_sec": 0, 00:15:37.418 "ddgst": false, 00:15:37.418 "fast_io_fail_timeout_sec": 0, 00:15:37.419 "hdgst": false, 00:15:37.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.419 "name": "TLSTEST", 00:15:37.419 "prchk_guard": false, 00:15:37.419 "prchk_reftag": false, 00:15:37.419 "psk": "/tmp/tmp.2n47uQD22B", 00:15:37.419 "reconnect_delay_sec": 0, 00:15:37.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.419 "traddr": "10.0.0.2", 00:15:37.419 "trsvcid": "4420", 00:15:37.419 "trtype": "TCP" 00:15:37.419 } 00:15:37.419 }, 00:15:37.419 { 00:15:37.419 "method": "bdev_nvme_set_hotplug", 00:15:37.419 "params": { 00:15:37.419 "enable": false, 00:15:37.419 "period_us": 100000 00:15:37.419 } 00:15:37.419 }, 00:15:37.419 { 00:15:37.419 "method": "bdev_wait_for_examine" 00:15:37.419 } 00:15:37.419 ] 00:15:37.419 }, 00:15:37.419 { 00:15:37.419 "subsystem": "nbd", 00:15:37.419 "config": [] 00:15:37.419 } 00:15:37.419 ] 00:15:37.419 }' 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84689 00:15:37.419 19:31:27 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84689 ']' 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84689 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84689 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:37.419 killing process with pid 84689 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84689' 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84689 00:15:37.419 Received shutdown signal, test time was about 10.000000 seconds 00:15:37.419 00:15:37.419 Latency(us) 00:15:37.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.419 =================================================================================================================== 00:15:37.419 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:37.419 [2024-07-15 19:31:27.140789] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:37.419 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84689 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84581 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84581 ']' 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84581 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84581 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:37.677 killing process with pid 84581 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84581' 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84581 00:15:37.677 [2024-07-15 19:31:27.331256] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:37.677 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84581 00:15:37.935 19:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:37.936 19:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.936 19:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:37.936 "subsystems": [ 00:15:37.936 { 00:15:37.936 "subsystem": "keyring", 00:15:37.936 "config": [] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "iobuf", 00:15:37.936 "config": [ 00:15:37.936 { 00:15:37.936 "method": "iobuf_set_options", 00:15:37.936 "params": { 00:15:37.936 "large_bufsize": 135168, 00:15:37.936 "large_pool_count": 1024, 00:15:37.936 "small_bufsize": 8192, 00:15:37.936 
"small_pool_count": 8192 00:15:37.936 } 00:15:37.936 } 00:15:37.936 ] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "sock", 00:15:37.936 "config": [ 00:15:37.936 { 00:15:37.936 "method": "sock_set_default_impl", 00:15:37.936 "params": { 00:15:37.936 "impl_name": "posix" 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "sock_impl_set_options", 00:15:37.936 "params": { 00:15:37.936 "enable_ktls": false, 00:15:37.936 "enable_placement_id": 0, 00:15:37.936 "enable_quickack": false, 00:15:37.936 "enable_recv_pipe": true, 00:15:37.936 "enable_zerocopy_send_client": false, 00:15:37.936 "enable_zerocopy_send_server": true, 00:15:37.936 "impl_name": "ssl", 00:15:37.936 "recv_buf_size": 4096, 00:15:37.936 "send_buf_size": 4096, 00:15:37.936 "tls_version": 0, 00:15:37.936 "zerocopy_threshold": 0 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "sock_impl_set_options", 00:15:37.936 "params": { 00:15:37.936 "enable_ktls": false, 00:15:37.936 "enable_placement_id": 0, 00:15:37.936 "enable_quickack": false, 00:15:37.936 "enable_recv_pipe": true, 00:15:37.936 "enable_zerocopy_send_client": false, 00:15:37.936 "enable_zerocopy_send_server": true, 00:15:37.936 "impl_name": "posix", 00:15:37.936 "recv_buf_size": 2097152, 00:15:37.936 "send_buf_size": 2097152, 00:15:37.936 "tls_version": 0, 00:15:37.936 "zerocopy_threshold": 0 00:15:37.936 } 00:15:37.936 } 00:15:37.936 ] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "vmd", 00:15:37.936 "config": [] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "accel", 00:15:37.936 "config": [ 00:15:37.936 { 00:15:37.936 "method": "accel_set_options", 00:15:37.936 "params": { 00:15:37.936 "buf_count": 2048, 00:15:37.936 "large_cache_size": 16, 00:15:37.936 "sequence_count": 2048, 00:15:37.936 "small_cache_size": 128, 00:15:37.936 "task_count": 2048 00:15:37.936 } 00:15:37.936 } 00:15:37.936 ] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "bdev", 00:15:37.936 "config": [ 00:15:37.936 { 00:15:37.936 "method": "bdev_set_options", 00:15:37.936 "params": { 00:15:37.936 "bdev_auto_examine": true, 00:15:37.936 "bdev_io_cache_size": 256, 00:15:37.936 "bdev_io_pool_size": 65535, 00:15:37.936 "iobuf_large_cache_size": 16, 00:15:37.936 "iobuf_small_cache_size": 128 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "bdev_raid_set_options", 00:15:37.936 "params": { 00:15:37.936 "process_window_size_kb": 1024 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "bdev_iscsi_set_options", 00:15:37.936 "params": { 00:15:37.936 "timeout_sec": 30 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "bdev_nvme_set_options", 00:15:37.936 "params": { 00:15:37.936 "action_on_timeout": "none", 00:15:37.936 "allow_accel_sequence": false, 00:15:37.936 "arbitration_burst": 0, 00:15:37.936 "bdev_retry_count": 3, 00:15:37.936 "ctrlr_loss_timeout_sec": 0, 00:15:37.936 "delay_cmd_submit": true, 00:15:37.936 "dhchap_dhgroups": [ 00:15:37.936 "null", 00:15:37.936 "ffdhe2048", 00:15:37.936 "ffdhe3072", 00:15:37.936 "ffdhe4096", 00:15:37.936 "ffdhe6144", 00:15:37.936 "ffdhe8192" 00:15:37.936 ], 00:15:37.936 "dhchap_digests": [ 00:15:37.936 "sha256", 00:15:37.936 "sha384", 00:15:37.936 "sha512" 00:15:37.936 ], 00:15:37.936 "disable_auto_failback": false, 00:15:37.936 "fast_io_fail_timeout_sec": 0, 00:15:37.936 "generate_uuids": false, 00:15:37.936 "high_priority_weight": 0, 00:15:37.936 "io_path_stat": false, 00:15:37.936 "io_queue_requests": 0, 00:15:37.936 
"keep_alive_timeout_ms": 10000, 00:15:37.936 "low_priority_weight": 0, 00:15:37.936 "medium_priority_weight": 0, 00:15:37.936 "nvme_adminq_poll_period_us": 10000, 00:15:37.936 "nvme_error_stat": false, 00:15:37.936 "nvme_ioq_poll_period_us": 0, 00:15:37.936 "rdma_cm_event_timeout_ms": 0, 00:15:37.936 "rdma_max_cq_size": 0, 00:15:37.936 "rdma_srq_size": 0, 00:15:37.936 "reconnect_delay_sec": 0, 00:15:37.936 "timeout_admin_us": 0, 00:15:37.936 "timeout_us": 0, 00:15:37.936 "transport_ack_timeout": 0, 00:15:37.936 "transport_retry_count": 4, 00:15:37.936 "transport_tos": 0 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "bdev_nvme_set_hotplug", 00:15:37.936 "params": { 00:15:37.936 "enable": false, 00:15:37.936 "period_us": 100000 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "bdev_malloc_create", 00:15:37.936 "params": { 00:15:37.936 "block_size": 4096, 00:15:37.936 "name": "malloc0", 00:15:37.936 "num_blocks": 8192, 00:15:37.936 "optimal_io_boundary": 0, 00:15:37.936 "physical_block_size": 4096, 00:15:37.936 "uuid": "93ef85ab-caf7-4fc5-b940-e6f79aedc122" 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "bdev_wait_for_examine" 00:15:37.936 } 00:15:37.936 ] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "nbd", 00:15:37.936 "config": [] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "scheduler", 00:15:37.936 "config": [ 00:15:37.936 { 00:15:37.936 "method": "framework_set_scheduler", 00:15:37.936 "params": { 00:15:37.936 "name": "static" 00:15:37.936 } 00:15:37.936 } 00:15:37.936 ] 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "subsystem": "nvmf", 00:15:37.936 "config": [ 00:15:37.936 { 00:15:37.936 "method": "nvmf_set_config", 00:15:37.936 "params": { 00:15:37.936 "admin_cmd_passthru": { 00:15:37.936 "identify_ctrlr": false 00:15:37.936 }, 00:15:37.936 "discovery_filter": "match_any" 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "nvmf_set_max_subsystems", 00:15:37.936 "params": { 00:15:37.936 "max_subsystems": 1024 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "nvmf_set_crdt", 00:15:37.936 "params": { 00:15:37.936 "crdt1": 0, 00:15:37.936 "crdt2": 0, 00:15:37.936 "crdt3": 0 00:15:37.936 } 00:15:37.936 }, 00:15:37.936 { 00:15:37.936 "method": "nvmf_create_transport", 00:15:37.936 "params": { 00:15:37.936 "abort_timeout_sec": 1, 00:15:37.936 "ack_timeout": 0, 00:15:37.936 "buf_cache_size": 4294967295, 00:15:37.936 "c2h_success": false, 00:15:37.936 "data_wr_pool_size": 0, 00:15:37.936 "dif_insert_or_strip": false, 00:15:37.936 "in_capsule_data_size": 4096, 00:15:37.936 "io_unit_size": 131072, 00:15:37.936 "max_aq_depth": 128, 00:15:37.936 "max_io_qpairs_per_ctrlr": 127, 00:15:37.936 "max_io_size": 131072, 00:15:37.937 "max_queue_depth": 128, 00:15:37.937 "num_shared_buffers": 511, 00:15:37.937 "sock_priority": 0, 00:15:37.937 "trtype": "TCP", 00:15:37.937 "zcopy": false 00:15:37.937 } 00:15:37.937 }, 00:15:37.937 { 00:15:37.937 "method": "nvmf_create_subsystem", 00:15:37.937 "params": { 00:15:37.937 "allow_any_host": false, 00:15:37.937 "ana_reporting": false, 00:15:37.937 "max_cntlid": 65519, 00:15:37.937 "max_namespaces": 10, 00:15:37.937 "min_cntlid": 1, 00:15:37.937 "model_number": "SPDK bdev Controller", 00:15:37.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.937 "serial_number": "SPDK00000000000001" 00:15:37.937 } 00:15:37.937 }, 00:15:37.937 { 00:15:37.937 "method": "nvmf_subsystem_add_host", 00:15:37.937 "params": { 00:15:37.937 "host": 
"nqn.2016-06.io.spdk:host1", 00:15:37.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.937 "psk": "/tmp/tmp.2n47uQD22B" 00:15:37.937 } 00:15:37.937 }, 00:15:37.937 { 00:15:37.937 "method": "nvmf_subsystem_add_ns", 00:15:37.937 "params": { 00:15:37.937 "namespace": { 00:15:37.937 "bdev_name": "malloc0", 00:15:37.937 "nguid": "93EF85ABCAF74FC5B940E6F79AEDC122", 00:15:37.937 "no_auto_visible": false, 00:15:37.937 "nsid": 1, 00:15:37.937 "uuid": "93ef85ab-caf7-4fc5-b940-e6f79aedc122" 00:15:37.937 }, 00:15:37.937 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:37.937 } 00:15:37.937 }, 00:15:37.937 { 00:15:37.937 "method": "nvmf_subsystem_add_listener", 00:15:37.937 "params": { 00:15:37.937 "listen_address": { 00:15:37.937 "adrfam": "IPv4", 00:15:37.937 "traddr": "10.0.0.2", 00:15:37.937 "trsvcid": "4420", 00:15:37.937 "trtype": "TCP" 00:15:37.937 }, 00:15:37.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.937 "secure_channel": true 00:15:37.937 } 00:15:37.937 } 00:15:37.937 ] 00:15:37.937 } 00:15:37.937 ] 00:15:37.937 }' 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84761 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84761 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84761 ']' 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.937 19:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.937 [2024-07-15 19:31:27.633874] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:37.937 [2024-07-15 19:31:27.634011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.195 [2024-07-15 19:31:27.778146] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.195 [2024-07-15 19:31:27.838053] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.195 [2024-07-15 19:31:27.838112] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.195 [2024-07-15 19:31:27.838125] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.195 [2024-07-15 19:31:27.838135] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.195 [2024-07-15 19:31:27.838142] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.195 [2024-07-15 19:31:27.838223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.452 [2024-07-15 19:31:28.022622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.452 [2024-07-15 19:31:28.038542] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:38.452 [2024-07-15 19:31:28.054525] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.452 [2024-07-15 19:31:28.054772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84805 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84805 /var/tmp/bdevperf.sock 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84805 ']' 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
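The bdevperf launch traced next follows the same pattern on the initiator side: the app is started idle (-z) on its own RPC socket, its configuration (including the bdev_nvme_attach_controller call that carries the /tmp/tmp.2n47uQD22B PSK) arrives as JSON on /dev/fd/63, and the verify workload only starts once bdevperf.py issues perform_tests. The same flow, sketched with an assumed file name in place of the fd:

    # start bdevperf idle, listening for RPCs on its private socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c /tmp/bdevperf_tls.json &
    # kick off the configured 10-second verify job over that socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests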
00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.018 19:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:39.018 "subsystems": [ 00:15:39.018 { 00:15:39.018 "subsystem": "keyring", 00:15:39.018 "config": [] 00:15:39.018 }, 00:15:39.018 { 00:15:39.018 "subsystem": "iobuf", 00:15:39.018 "config": [ 00:15:39.018 { 00:15:39.018 "method": "iobuf_set_options", 00:15:39.018 "params": { 00:15:39.018 "large_bufsize": 135168, 00:15:39.018 "large_pool_count": 1024, 00:15:39.018 "small_bufsize": 8192, 00:15:39.018 "small_pool_count": 8192 00:15:39.018 } 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "subsystem": "sock", 00:15:39.019 "config": [ 00:15:39.019 { 00:15:39.019 "method": "sock_set_default_impl", 00:15:39.019 "params": { 00:15:39.019 "impl_name": "posix" 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "sock_impl_set_options", 00:15:39.019 "params": { 00:15:39.019 "enable_ktls": false, 00:15:39.019 "enable_placement_id": 0, 00:15:39.019 "enable_quickack": false, 00:15:39.019 "enable_recv_pipe": true, 00:15:39.019 "enable_zerocopy_send_client": false, 00:15:39.019 "enable_zerocopy_send_server": true, 00:15:39.019 "impl_name": "ssl", 00:15:39.019 "recv_buf_size": 4096, 00:15:39.019 "send_buf_size": 4096, 00:15:39.019 "tls_version": 0, 00:15:39.019 "zerocopy_threshold": 0 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "sock_impl_set_options", 00:15:39.019 "params": { 00:15:39.019 "enable_ktls": false, 00:15:39.019 "enable_placement_id": 0, 00:15:39.019 "enable_quickack": false, 00:15:39.019 "enable_recv_pipe": true, 00:15:39.019 "enable_zerocopy_send_client": false, 00:15:39.019 "enable_zerocopy_send_server": true, 00:15:39.019 "impl_name": "posix", 00:15:39.019 "recv_buf_size": 2097152, 00:15:39.019 "send_buf_size": 2097152, 00:15:39.019 "tls_version": 0, 00:15:39.019 "zerocopy_threshold": 0 00:15:39.019 } 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "subsystem": "vmd", 00:15:39.019 "config": [] 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "subsystem": "accel", 00:15:39.019 "config": [ 00:15:39.019 { 00:15:39.019 "method": "accel_set_options", 00:15:39.019 "params": { 00:15:39.019 "buf_count": 2048, 00:15:39.019 "large_cache_size": 16, 00:15:39.019 "sequence_count": 2048, 00:15:39.019 "small_cache_size": 128, 00:15:39.019 "task_count": 2048 00:15:39.019 } 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "subsystem": "bdev", 00:15:39.019 "config": [ 00:15:39.019 { 00:15:39.019 "method": "bdev_set_options", 00:15:39.019 "params": { 00:15:39.019 "bdev_auto_examine": true, 00:15:39.019 "bdev_io_cache_size": 256, 00:15:39.019 "bdev_io_pool_size": 65535, 00:15:39.019 "iobuf_large_cache_size": 16, 00:15:39.019 "iobuf_small_cache_size": 128 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "bdev_raid_set_options", 00:15:39.019 "params": { 00:15:39.019 "process_window_size_kb": 1024 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "bdev_iscsi_set_options", 00:15:39.019 "params": { 00:15:39.019 "timeout_sec": 30 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": 
"bdev_nvme_set_options", 00:15:39.019 "params": { 00:15:39.019 "action_on_timeout": "none", 00:15:39.019 "allow_accel_sequence": false, 00:15:39.019 "arbitration_burst": 0, 00:15:39.019 "bdev_retry_count": 3, 00:15:39.019 "ctrlr_loss_timeout_sec": 0, 00:15:39.019 "delay_cmd_submit": true, 00:15:39.019 "dhchap_dhgroups": [ 00:15:39.019 "null", 00:15:39.019 "ffdhe2048", 00:15:39.019 "ffdhe3072", 00:15:39.019 "ffdhe4096", 00:15:39.019 "ffdhe6144", 00:15:39.019 "ffdhe8192" 00:15:39.019 ], 00:15:39.019 "dhchap_digests": [ 00:15:39.019 "sha256", 00:15:39.019 "sha384", 00:15:39.019 "sha512" 00:15:39.019 ], 00:15:39.019 "disable_auto_failback": false, 00:15:39.019 "fast_io_fail_timeout_sec": 0, 00:15:39.019 "generate_uuids": false, 00:15:39.019 "high_priority_weight": 0, 00:15:39.019 "io_path_stat": false, 00:15:39.019 "io_queue_requests": 512, 00:15:39.019 "keep_alive_timeout_ms": 10000, 00:15:39.019 "low_priority_weight": 0, 00:15:39.019 "medium_priority_weight": 0, 00:15:39.019 "nvme_adminq_poll_period_us": 10000, 00:15:39.019 "nvme_error_stat": false, 00:15:39.019 "nvme_ioq_poll_period_us": 0, 00:15:39.019 "rdma_cm_event_timeout_ms": 0, 00:15:39.019 "rdma_max_cq_size": 0, 00:15:39.019 "rdma_srq_size": 0, 00:15:39.019 "reconnect_delay_sec": 0, 00:15:39.019 "timeout_admin_us": 0, 00:15:39.019 "timeout_us": 0, 00:15:39.019 "transport_ack_timeout": 0, 00:15:39.019 "transport_retry_count": 4, 00:15:39.019 "transport_tos": 0 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "bdev_nvme_attach_controller", 00:15:39.019 "params": { 00:15:39.019 "adrfam": "IPv4", 00:15:39.019 "ctrlr_loss_timeout_sec": 0, 00:15:39.019 "ddgst": false, 00:15:39.019 "fast_io_fail_timeout_sec": 0, 00:15:39.019 "hdgst": false, 00:15:39.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.019 "name": "TLSTEST", 00:15:39.019 "prchk_guard": false, 00:15:39.019 "prchk_reftag": false, 00:15:39.019 "psk": "/tmp/tmp.2n47uQD22B", 00:15:39.019 "reconnect_delay_sec": 0, 00:15:39.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.019 "traddr": "10.0.0.2", 00:15:39.019 "trsvcid": "4420", 00:15:39.019 "trtype": "TCP" 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "bdev_nvme_set_hotplug", 00:15:39.019 "params": { 00:15:39.019 "enable": false, 00:15:39.019 "period_us": 100000 00:15:39.019 } 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "method": "bdev_wait_for_examine" 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "subsystem": "nbd", 00:15:39.019 "config": [] 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 }' 00:15:39.019 [2024-07-15 19:31:28.759174] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:39.019 [2024-07-15 19:31:28.759267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84805 ] 00:15:39.277 [2024-07-15 19:31:28.893758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.277 [2024-07-15 19:31:28.952212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.277 [2024-07-15 19:31:29.076075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:39.277 [2024-07-15 19:31:29.076194] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:40.212 19:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.212 19:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:40.212 19:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:40.212 Running I/O for 10 seconds... 00:15:52.402 00:15:52.402 Latency(us) 00:15:52.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.402 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:52.402 Verification LBA range: start 0x0 length 0x2000 00:15:52.402 TLSTESTn1 : 10.04 3176.41 12.41 0.00 0.00 40187.91 9472.93 37415.10 00:15:52.402 =================================================================================================================== 00:15:52.402 Total : 3176.41 12.41 0.00 0.00 40187.91 9472.93 37415.10 00:15:52.402 0 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84805 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84805 ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84805 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84805 00:15:52.402 killing process with pid 84805 00:15:52.402 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.402 00:15:52.402 Latency(us) 00:15:52.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.402 =================================================================================================================== 00:15:52.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84805' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84805 00:15:52.402 [2024-07-15 19:31:40.028307] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:52.402 19:31:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84805 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84761 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84761 ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84761 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84761 00:15:52.402 killing process with pid 84761 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84761' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84761 00:15:52.402 [2024-07-15 19:31:40.264739] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84761 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84947 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84947 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84947 ']' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.402 [2024-07-15 19:31:40.545667] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:52.402 [2024-07-15 19:31:40.545805] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.402 [2024-07-15 19:31:40.687382] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.402 [2024-07-15 19:31:40.747724] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:52.402 [2024-07-15 19:31:40.747789] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.402 [2024-07-15 19:31:40.747802] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.402 [2024-07-15 19:31:40.747810] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.402 [2024-07-15 19:31:40.747818] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.402 [2024-07-15 19:31:40.747845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.2n47uQD22B 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2n47uQD22B 00:15:52.402 19:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:52.402 [2024-07-15 19:31:41.157293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.402 19:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:52.403 19:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:52.403 [2024-07-15 19:31:41.809407] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:52.403 [2024-07-15 19:31:41.809651] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.403 19:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:52.403 malloc0 00:15:52.403 19:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:52.660 19:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2n47uQD22B 00:15:52.919 [2024-07-15 19:31:42.680834] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85036 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85036 /var/tmp/bdevperf.sock 00:15:52.919 
19:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85036 ']' 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.919 19:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.177 [2024-07-15 19:31:42.780375] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:53.177 [2024-07-15 19:31:42.780520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85036 ] 00:15:53.177 [2024-07-15 19:31:42.920582] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.177 [2024-07-15 19:31:42.980179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.436 19:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.436 19:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:53.436 19:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2n47uQD22B 00:15:53.694 19:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:53.953 [2024-07-15 19:31:43.674619] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:53.953 nvme0n1 00:15:54.211 19:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:54.211 Running I/O for 1 seconds... 
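From here the initiator references the PSK through a keyring entry rather than a raw file path: the rpc.py calls traced above load /tmp/tmp.2n47uQD22B as key0 and then attach the controller with --psk key0, and the one-second verify run whose results follow exercises that path. Condensed from the trace, the client-side sequence is:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.2n47uQD22B
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1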
00:15:55.175 00:15:55.175 Latency(us) 00:15:55.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.175 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.175 Verification LBA range: start 0x0 length 0x2000 00:15:55.175 nvme0n1 : 1.02 3263.93 12.75 0.00 0.00 38762.80 7477.06 37891.72 00:15:55.175 =================================================================================================================== 00:15:55.175 Total : 3263.93 12.75 0.00 0.00 38762.80 7477.06 37891.72 00:15:55.175 0 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85036 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85036 ']' 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85036 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85036 00:15:55.175 killing process with pid 85036 00:15:55.175 Received shutdown signal, test time was about 1.000000 seconds 00:15:55.175 00:15:55.175 Latency(us) 00:15:55.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.175 =================================================================================================================== 00:15:55.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85036' 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85036 00:15:55.175 19:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85036 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84947 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84947 ']' 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84947 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84947 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84947' 00:15:55.434 killing process with pid 84947 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84947 00:15:55.434 [2024-07-15 19:31:45.163111] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:55.434 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84947 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85099 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85099 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85099 ']' 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.692 19:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.692 [2024-07-15 19:31:45.403063] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:55.692 [2024-07-15 19:31:45.403161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.950 [2024-07-15 19:31:45.534846] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.950 [2024-07-15 19:31:45.621866] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.950 [2024-07-15 19:31:45.621956] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.950 [2024-07-15 19:31:45.621973] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.950 [2024-07-15 19:31:45.621986] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.950 [2024-07-15 19:31:45.622000] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
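The target started here (pid 85099) is configured live over the default RPC socket rather than from a canned JSON file; the tgtcfg saved further down shows the resulting state, where the host PSK is referenced by the keyring name key0 and the listener pins "sock_impl": "ssl" instead of relying on the deprecated PSK file path. The rpc_cmd wrappers themselves are not echoed in the trace, but an equivalent target-side sequence, assuming --psk accepts the key name exactly as it appears in that saved config, would look roughly like:

    # register the PSK file under a keyring name on the target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2n47uQD22B
    # allow host1 to connect to cnode1, authenticated with that key
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0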
00:15:55.950 [2024-07-15 19:31:45.622043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.887 [2024-07-15 19:31:46.514643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.887 malloc0 00:15:56.887 [2024-07-15 19:31:46.543812] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:56.887 [2024-07-15 19:31:46.544150] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85149 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85149 /var/tmp/bdevperf.sock 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85149 ']' 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.887 19:31:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.887 [2024-07-15 19:31:46.643872] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:56.887 [2024-07-15 19:31:46.644019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85149 ] 00:15:57.157 [2024-07-15 19:31:46.786235] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.157 [2024-07-15 19:31:46.878759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.459 19:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.459 19:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:57.459 19:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2n47uQD22B 00:15:57.719 19:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:58.286 [2024-07-15 19:31:47.794688] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.286 nvme0n1 00:15:58.286 19:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.286 Running I/O for 1 seconds... 00:15:59.660 00:15:59.660 Latency(us) 00:15:59.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:59.660 Verification LBA range: start 0x0 length 0x2000 00:15:59.660 nvme0n1 : 1.02 2742.87 10.71 0.00 0.00 46073.96 7149.38 41943.04 00:15:59.660 =================================================================================================================== 00:15:59.660 Total : 2742.87 10.71 0.00 0.00 46073.96 7149.38 41943.04 00:15:59.660 0 00:15:59.660 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:59.660 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.661 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.661 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.661 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:59.661 "subsystems": [ 00:15:59.661 { 00:15:59.661 "subsystem": "keyring", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "keyring_file_add_key", 00:15:59.661 "params": { 00:15:59.661 "name": "key0", 00:15:59.661 "path": "/tmp/tmp.2n47uQD22B" 00:15:59.661 } 00:15:59.661 } 00:15:59.661 ] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "iobuf", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "iobuf_set_options", 00:15:59.661 "params": { 00:15:59.661 "large_bufsize": 135168, 00:15:59.661 "large_pool_count": 1024, 00:15:59.661 "small_bufsize": 8192, 00:15:59.661 "small_pool_count": 8192 00:15:59.661 } 00:15:59.661 } 00:15:59.661 ] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "sock", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "sock_set_default_impl", 00:15:59.661 "params": { 00:15:59.661 "impl_name": "posix" 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "sock_impl_set_options", 00:15:59.661 "params": { 00:15:59.661 
"enable_ktls": false, 00:15:59.661 "enable_placement_id": 0, 00:15:59.661 "enable_quickack": false, 00:15:59.661 "enable_recv_pipe": true, 00:15:59.661 "enable_zerocopy_send_client": false, 00:15:59.661 "enable_zerocopy_send_server": true, 00:15:59.661 "impl_name": "ssl", 00:15:59.661 "recv_buf_size": 4096, 00:15:59.661 "send_buf_size": 4096, 00:15:59.661 "tls_version": 0, 00:15:59.661 "zerocopy_threshold": 0 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "sock_impl_set_options", 00:15:59.661 "params": { 00:15:59.661 "enable_ktls": false, 00:15:59.661 "enable_placement_id": 0, 00:15:59.661 "enable_quickack": false, 00:15:59.661 "enable_recv_pipe": true, 00:15:59.661 "enable_zerocopy_send_client": false, 00:15:59.661 "enable_zerocopy_send_server": true, 00:15:59.661 "impl_name": "posix", 00:15:59.661 "recv_buf_size": 2097152, 00:15:59.661 "send_buf_size": 2097152, 00:15:59.661 "tls_version": 0, 00:15:59.661 "zerocopy_threshold": 0 00:15:59.661 } 00:15:59.661 } 00:15:59.661 ] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "vmd", 00:15:59.661 "config": [] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "accel", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "accel_set_options", 00:15:59.661 "params": { 00:15:59.661 "buf_count": 2048, 00:15:59.661 "large_cache_size": 16, 00:15:59.661 "sequence_count": 2048, 00:15:59.661 "small_cache_size": 128, 00:15:59.661 "task_count": 2048 00:15:59.661 } 00:15:59.661 } 00:15:59.661 ] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "bdev", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "bdev_set_options", 00:15:59.661 "params": { 00:15:59.661 "bdev_auto_examine": true, 00:15:59.661 "bdev_io_cache_size": 256, 00:15:59.661 "bdev_io_pool_size": 65535, 00:15:59.661 "iobuf_large_cache_size": 16, 00:15:59.661 "iobuf_small_cache_size": 128 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "bdev_raid_set_options", 00:15:59.661 "params": { 00:15:59.661 "process_window_size_kb": 1024 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "bdev_iscsi_set_options", 00:15:59.661 "params": { 00:15:59.661 "timeout_sec": 30 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "bdev_nvme_set_options", 00:15:59.661 "params": { 00:15:59.661 "action_on_timeout": "none", 00:15:59.661 "allow_accel_sequence": false, 00:15:59.661 "arbitration_burst": 0, 00:15:59.661 "bdev_retry_count": 3, 00:15:59.661 "ctrlr_loss_timeout_sec": 0, 00:15:59.661 "delay_cmd_submit": true, 00:15:59.661 "dhchap_dhgroups": [ 00:15:59.661 "null", 00:15:59.661 "ffdhe2048", 00:15:59.661 "ffdhe3072", 00:15:59.661 "ffdhe4096", 00:15:59.661 "ffdhe6144", 00:15:59.661 "ffdhe8192" 00:15:59.661 ], 00:15:59.661 "dhchap_digests": [ 00:15:59.661 "sha256", 00:15:59.661 "sha384", 00:15:59.661 "sha512" 00:15:59.661 ], 00:15:59.661 "disable_auto_failback": false, 00:15:59.661 "fast_io_fail_timeout_sec": 0, 00:15:59.661 "generate_uuids": false, 00:15:59.661 "high_priority_weight": 0, 00:15:59.661 "io_path_stat": false, 00:15:59.661 "io_queue_requests": 0, 00:15:59.661 "keep_alive_timeout_ms": 10000, 00:15:59.661 "low_priority_weight": 0, 00:15:59.661 "medium_priority_weight": 0, 00:15:59.661 "nvme_adminq_poll_period_us": 10000, 00:15:59.661 "nvme_error_stat": false, 00:15:59.661 "nvme_ioq_poll_period_us": 0, 00:15:59.661 "rdma_cm_event_timeout_ms": 0, 00:15:59.661 "rdma_max_cq_size": 0, 00:15:59.661 "rdma_srq_size": 0, 00:15:59.661 "reconnect_delay_sec": 0, 00:15:59.661 "timeout_admin_us": 0, 
00:15:59.661 "timeout_us": 0, 00:15:59.661 "transport_ack_timeout": 0, 00:15:59.661 "transport_retry_count": 4, 00:15:59.661 "transport_tos": 0 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "bdev_nvme_set_hotplug", 00:15:59.661 "params": { 00:15:59.661 "enable": false, 00:15:59.661 "period_us": 100000 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "bdev_malloc_create", 00:15:59.661 "params": { 00:15:59.661 "block_size": 4096, 00:15:59.661 "name": "malloc0", 00:15:59.661 "num_blocks": 8192, 00:15:59.661 "optimal_io_boundary": 0, 00:15:59.661 "physical_block_size": 4096, 00:15:59.661 "uuid": "fc6f382c-48c5-4728-b937-f51c33ed1c66" 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "bdev_wait_for_examine" 00:15:59.661 } 00:15:59.661 ] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "nbd", 00:15:59.661 "config": [] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "scheduler", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "framework_set_scheduler", 00:15:59.661 "params": { 00:15:59.661 "name": "static" 00:15:59.661 } 00:15:59.661 } 00:15:59.661 ] 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "subsystem": "nvmf", 00:15:59.661 "config": [ 00:15:59.661 { 00:15:59.661 "method": "nvmf_set_config", 00:15:59.661 "params": { 00:15:59.661 "admin_cmd_passthru": { 00:15:59.661 "identify_ctrlr": false 00:15:59.661 }, 00:15:59.661 "discovery_filter": "match_any" 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_set_max_subsystems", 00:15:59.661 "params": { 00:15:59.661 "max_subsystems": 1024 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_set_crdt", 00:15:59.661 "params": { 00:15:59.661 "crdt1": 0, 00:15:59.661 "crdt2": 0, 00:15:59.661 "crdt3": 0 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_create_transport", 00:15:59.661 "params": { 00:15:59.661 "abort_timeout_sec": 1, 00:15:59.661 "ack_timeout": 0, 00:15:59.661 "buf_cache_size": 4294967295, 00:15:59.661 "c2h_success": false, 00:15:59.661 "data_wr_pool_size": 0, 00:15:59.661 "dif_insert_or_strip": false, 00:15:59.661 "in_capsule_data_size": 4096, 00:15:59.661 "io_unit_size": 131072, 00:15:59.661 "max_aq_depth": 128, 00:15:59.661 "max_io_qpairs_per_ctrlr": 127, 00:15:59.661 "max_io_size": 131072, 00:15:59.661 "max_queue_depth": 128, 00:15:59.661 "num_shared_buffers": 511, 00:15:59.661 "sock_priority": 0, 00:15:59.661 "trtype": "TCP", 00:15:59.661 "zcopy": false 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_create_subsystem", 00:15:59.661 "params": { 00:15:59.661 "allow_any_host": false, 00:15:59.661 "ana_reporting": false, 00:15:59.661 "max_cntlid": 65519, 00:15:59.661 "max_namespaces": 32, 00:15:59.661 "min_cntlid": 1, 00:15:59.661 "model_number": "SPDK bdev Controller", 00:15:59.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.661 "serial_number": "00000000000000000000" 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_subsystem_add_host", 00:15:59.661 "params": { 00:15:59.661 "host": "nqn.2016-06.io.spdk:host1", 00:15:59.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.661 "psk": "key0" 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_subsystem_add_ns", 00:15:59.661 "params": { 00:15:59.661 "namespace": { 00:15:59.661 "bdev_name": "malloc0", 00:15:59.661 "nguid": "FC6F382C48C54728B937F51C33ED1C66", 00:15:59.661 "no_auto_visible": false, 00:15:59.661 "nsid": 1, 00:15:59.661 "uuid": 
"fc6f382c-48c5-4728-b937-f51c33ed1c66" 00:15:59.661 }, 00:15:59.661 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:59.661 } 00:15:59.661 }, 00:15:59.661 { 00:15:59.661 "method": "nvmf_subsystem_add_listener", 00:15:59.661 "params": { 00:15:59.661 "listen_address": { 00:15:59.661 "adrfam": "IPv4", 00:15:59.661 "traddr": "10.0.0.2", 00:15:59.661 "trsvcid": "4420", 00:15:59.661 "trtype": "TCP" 00:15:59.661 }, 00:15:59.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.662 "secure_channel": false, 00:15:59.662 "sock_impl": "ssl" 00:15:59.662 } 00:15:59.662 } 00:15:59.662 ] 00:15:59.662 } 00:15:59.662 ] 00:15:59.662 }' 00:15:59.662 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:59.920 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:59.920 "subsystems": [ 00:15:59.920 { 00:15:59.920 "subsystem": "keyring", 00:15:59.920 "config": [ 00:15:59.920 { 00:15:59.920 "method": "keyring_file_add_key", 00:15:59.920 "params": { 00:15:59.920 "name": "key0", 00:15:59.920 "path": "/tmp/tmp.2n47uQD22B" 00:15:59.920 } 00:15:59.920 } 00:15:59.920 ] 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "subsystem": "iobuf", 00:15:59.920 "config": [ 00:15:59.920 { 00:15:59.920 "method": "iobuf_set_options", 00:15:59.920 "params": { 00:15:59.920 "large_bufsize": 135168, 00:15:59.920 "large_pool_count": 1024, 00:15:59.920 "small_bufsize": 8192, 00:15:59.920 "small_pool_count": 8192 00:15:59.920 } 00:15:59.920 } 00:15:59.920 ] 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "subsystem": "sock", 00:15:59.920 "config": [ 00:15:59.920 { 00:15:59.920 "method": "sock_set_default_impl", 00:15:59.920 "params": { 00:15:59.920 "impl_name": "posix" 00:15:59.920 } 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "method": "sock_impl_set_options", 00:15:59.920 "params": { 00:15:59.920 "enable_ktls": false, 00:15:59.920 "enable_placement_id": 0, 00:15:59.920 "enable_quickack": false, 00:15:59.920 "enable_recv_pipe": true, 00:15:59.920 "enable_zerocopy_send_client": false, 00:15:59.920 "enable_zerocopy_send_server": true, 00:15:59.920 "impl_name": "ssl", 00:15:59.920 "recv_buf_size": 4096, 00:15:59.920 "send_buf_size": 4096, 00:15:59.920 "tls_version": 0, 00:15:59.920 "zerocopy_threshold": 0 00:15:59.920 } 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "method": "sock_impl_set_options", 00:15:59.920 "params": { 00:15:59.920 "enable_ktls": false, 00:15:59.920 "enable_placement_id": 0, 00:15:59.920 "enable_quickack": false, 00:15:59.920 "enable_recv_pipe": true, 00:15:59.920 "enable_zerocopy_send_client": false, 00:15:59.920 "enable_zerocopy_send_server": true, 00:15:59.920 "impl_name": "posix", 00:15:59.920 "recv_buf_size": 2097152, 00:15:59.920 "send_buf_size": 2097152, 00:15:59.920 "tls_version": 0, 00:15:59.920 "zerocopy_threshold": 0 00:15:59.920 } 00:15:59.920 } 00:15:59.920 ] 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "subsystem": "vmd", 00:15:59.920 "config": [] 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "subsystem": "accel", 00:15:59.920 "config": [ 00:15:59.920 { 00:15:59.920 "method": "accel_set_options", 00:15:59.920 "params": { 00:15:59.920 "buf_count": 2048, 00:15:59.920 "large_cache_size": 16, 00:15:59.920 "sequence_count": 2048, 00:15:59.920 "small_cache_size": 128, 00:15:59.920 "task_count": 2048 00:15:59.920 } 00:15:59.920 } 00:15:59.920 ] 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "subsystem": "bdev", 00:15:59.920 "config": [ 00:15:59.920 { 00:15:59.920 "method": "bdev_set_options", 00:15:59.920 "params": { 00:15:59.920 
"bdev_auto_examine": true, 00:15:59.920 "bdev_io_cache_size": 256, 00:15:59.920 "bdev_io_pool_size": 65535, 00:15:59.920 "iobuf_large_cache_size": 16, 00:15:59.920 "iobuf_small_cache_size": 128 00:15:59.920 } 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "method": "bdev_raid_set_options", 00:15:59.920 "params": { 00:15:59.920 "process_window_size_kb": 1024 00:15:59.920 } 00:15:59.920 }, 00:15:59.921 { 00:15:59.921 "method": "bdev_iscsi_set_options", 00:15:59.921 "params": { 00:15:59.921 "timeout_sec": 30 00:15:59.921 } 00:15:59.921 }, 00:15:59.921 { 00:15:59.921 "method": "bdev_nvme_set_options", 00:15:59.921 "params": { 00:15:59.921 "action_on_timeout": "none", 00:15:59.921 "allow_accel_sequence": false, 00:15:59.921 "arbitration_burst": 0, 00:15:59.921 "bdev_retry_count": 3, 00:15:59.921 "ctrlr_loss_timeout_sec": 0, 00:15:59.921 "delay_cmd_submit": true, 00:15:59.921 "dhchap_dhgroups": [ 00:15:59.921 "null", 00:15:59.921 "ffdhe2048", 00:15:59.921 "ffdhe3072", 00:15:59.921 "ffdhe4096", 00:15:59.921 "ffdhe6144", 00:15:59.921 "ffdhe8192" 00:15:59.921 ], 00:15:59.921 "dhchap_digests": [ 00:15:59.921 "sha256", 00:15:59.921 "sha384", 00:15:59.921 "sha512" 00:15:59.921 ], 00:15:59.921 "disable_auto_failback": false, 00:15:59.921 "fast_io_fail_timeout_sec": 0, 00:15:59.921 "generate_uuids": false, 00:15:59.921 "high_priority_weight": 0, 00:15:59.921 "io_path_stat": false, 00:15:59.921 "io_queue_requests": 512, 00:15:59.921 "keep_alive_timeout_ms": 10000, 00:15:59.921 "low_priority_weight": 0, 00:15:59.921 "medium_priority_weight": 0, 00:15:59.921 "nvme_adminq_poll_period_us": 10000, 00:15:59.921 "nvme_error_stat": false, 00:15:59.921 "nvme_ioq_poll_period_us": 0, 00:15:59.921 "rdma_cm_event_timeout_ms": 0, 00:15:59.921 "rdma_max_cq_size": 0, 00:15:59.921 "rdma_srq_size": 0, 00:15:59.921 "reconnect_delay_sec": 0, 00:15:59.921 "timeout_admin_us": 0, 00:15:59.921 "timeout_us": 0, 00:15:59.921 "transport_ack_timeout": 0, 00:15:59.921 "transport_retry_count": 4, 00:15:59.921 "transport_tos": 0 00:15:59.921 } 00:15:59.921 }, 00:15:59.921 { 00:15:59.921 "method": "bdev_nvme_attach_controller", 00:15:59.921 "params": { 00:15:59.921 "adrfam": "IPv4", 00:15:59.921 "ctrlr_loss_timeout_sec": 0, 00:15:59.921 "ddgst": false, 00:15:59.921 "fast_io_fail_timeout_sec": 0, 00:15:59.921 "hdgst": false, 00:15:59.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.921 "name": "nvme0", 00:15:59.921 "prchk_guard": false, 00:15:59.921 "prchk_reftag": false, 00:15:59.921 "psk": "key0", 00:15:59.921 "reconnect_delay_sec": 0, 00:15:59.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.921 "traddr": "10.0.0.2", 00:15:59.921 "trsvcid": "4420", 00:15:59.921 "trtype": "TCP" 00:15:59.921 } 00:15:59.921 }, 00:15:59.921 { 00:15:59.921 "method": "bdev_nvme_set_hotplug", 00:15:59.921 "params": { 00:15:59.921 "enable": false, 00:15:59.921 "period_us": 100000 00:15:59.921 } 00:15:59.921 }, 00:15:59.921 { 00:15:59.921 "method": "bdev_enable_histogram", 00:15:59.921 "params": { 00:15:59.921 "enable": true, 00:15:59.921 "name": "nvme0n1" 00:15:59.921 } 00:15:59.921 }, 00:15:59.921 { 00:15:59.921 "method": "bdev_wait_for_examine" 00:15:59.921 } 00:15:59.921 ] 00:15:59.921 }, 00:15:59.921 { 00:15:59.921 "subsystem": "nbd", 00:15:59.921 "config": [] 00:15:59.921 } 00:15:59.921 ] 00:15:59.921 }' 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 85149 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85149 ']' 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 85149 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85149 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.921 killing process with pid 85149 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85149' 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85149 00:15:59.921 Received shutdown signal, test time was about 1.000000 seconds 00:15:59.921 00:15:59.921 Latency(us) 00:15:59.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.921 =================================================================================================================== 00:15:59.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.921 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85149 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 85099 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85099 ']' 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85099 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85099 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.179 killing process with pid 85099 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85099' 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85099 00:16:00.179 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85099 00:16:00.437 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:16:00.437 19:31:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.437 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.437 19:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:16:00.437 "subsystems": [ 00:16:00.437 { 00:16:00.437 "subsystem": "keyring", 00:16:00.437 "config": [ 00:16:00.437 { 00:16:00.437 "method": "keyring_file_add_key", 00:16:00.437 "params": { 00:16:00.438 "name": "key0", 00:16:00.438 "path": "/tmp/tmp.2n47uQD22B" 00:16:00.438 } 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "iobuf", 00:16:00.438 "config": [ 00:16:00.438 { 00:16:00.438 "method": "iobuf_set_options", 00:16:00.438 "params": { 00:16:00.438 "large_bufsize": 135168, 00:16:00.438 "large_pool_count": 1024, 00:16:00.438 "small_bufsize": 8192, 00:16:00.438 "small_pool_count": 8192 00:16:00.438 } 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "sock", 00:16:00.438 "config": [ 00:16:00.438 { 00:16:00.438 "method": 
"sock_set_default_impl", 00:16:00.438 "params": { 00:16:00.438 "impl_name": "posix" 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "sock_impl_set_options", 00:16:00.438 "params": { 00:16:00.438 "enable_ktls": false, 00:16:00.438 "enable_placement_id": 0, 00:16:00.438 "enable_quickack": false, 00:16:00.438 "enable_recv_pipe": true, 00:16:00.438 "enable_zerocopy_send_client": false, 00:16:00.438 "enable_zerocopy_send_server": true, 00:16:00.438 "impl_name": "ssl", 00:16:00.438 "recv_buf_size": 4096, 00:16:00.438 "send_buf_size": 4096, 00:16:00.438 "tls_version": 0, 00:16:00.438 "zerocopy_threshold": 0 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "sock_impl_set_options", 00:16:00.438 "params": { 00:16:00.438 "enable_ktls": false, 00:16:00.438 "enable_placement_id": 0, 00:16:00.438 "enable_quickack": false, 00:16:00.438 "enable_recv_pipe": true, 00:16:00.438 "enable_zerocopy_send_client": false, 00:16:00.438 "enable_zerocopy_send_server": true, 00:16:00.438 "impl_name": "posix", 00:16:00.438 "recv_buf_size": 2097152, 00:16:00.438 "send_buf_size": 2097152, 00:16:00.438 "tls_version": 0, 00:16:00.438 "zerocopy_threshold": 0 00:16:00.438 } 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "vmd", 00:16:00.438 "config": [] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "accel", 00:16:00.438 "config": [ 00:16:00.438 { 00:16:00.438 "method": "accel_set_options", 00:16:00.438 "params": { 00:16:00.438 "buf_count": 2048, 00:16:00.438 "large_cache_size": 16, 00:16:00.438 "sequence_count": 2048, 00:16:00.438 "small_cache_size": 128, 00:16:00.438 "task_count": 2048 00:16:00.438 } 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "bdev", 00:16:00.438 "config": [ 00:16:00.438 { 00:16:00.438 "method": "bdev_set_options", 00:16:00.438 "params": { 00:16:00.438 "bdev_auto_examine": true, 00:16:00.438 "bdev_io_cache_size": 256, 00:16:00.438 "bdev_io_pool_size": 65535, 00:16:00.438 "iobuf_large_cache_size": 16, 00:16:00.438 "iobuf_small_cache_size": 128 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "bdev_raid_set_options", 00:16:00.438 "params": { 00:16:00.438 "process_window_size_kb": 1024 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "bdev_iscsi_set_options", 00:16:00.438 "params": { 00:16:00.438 "timeout_sec": 30 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "bdev_nvme_set_options", 00:16:00.438 "params": { 00:16:00.438 "action_on_timeout": "none", 00:16:00.438 "allow_accel_sequence": false, 00:16:00.438 "arbitration_burst": 0, 00:16:00.438 "bdev_retry_count": 3, 00:16:00.438 "ctrlr_loss_timeout_sec": 0, 00:16:00.438 "delay_cmd_submit": true, 00:16:00.438 "dhchap_dhgroups": [ 00:16:00.438 "null", 00:16:00.438 "ffdhe2048", 00:16:00.438 "ffdhe3072", 00:16:00.438 "ffdhe4096", 00:16:00.438 "ffdhe6144", 00:16:00.438 "ffdhe8192" 00:16:00.438 ], 00:16:00.438 "dhchap_digests": [ 00:16:00.438 "sha256", 00:16:00.438 "sha384", 00:16:00.438 "sha512" 00:16:00.438 ], 00:16:00.438 "disable_auto_failback": false, 00:16:00.438 "fast_io_fail_timeout_sec": 0, 00:16:00.438 "generate_uuids": false, 00:16:00.438 "high_priority_weight": 0, 00:16:00.438 "io_path_stat": false, 00:16:00.438 "io_queue_requests": 0, 00:16:00.438 "keep_alive_timeout_ms": 10000, 00:16:00.438 "low_priority_weight": 0, 00:16:00.438 "medium_priority_weight": 0, 00:16:00.438 "nvme_adminq_poll_period_us": 10000, 00:16:00.438 "nvme_error_stat": false, 00:16:00.438 
"nvme_ioq_poll_period_us": 0, 00:16:00.438 "rdma_cm_event_timeout_ms": 0, 00:16:00.438 "rdma_max_cq_size": 0, 00:16:00.438 "rdma_srq_size": 0, 00:16:00.438 "reconnect_delay_sec": 0, 00:16:00.438 "timeout_admin_us": 0, 00:16:00.438 "timeout_us": 0, 00:16:00.438 "transport_ack_timeout": 0, 00:16:00.438 "transport_retry_count": 4, 00:16:00.438 "transport_tos": 0 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "bdev_nvme_set_hotplug", 00:16:00.438 "params": { 00:16:00.438 "enable": false, 00:16:00.438 "period_us": 100000 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "bdev_malloc_create", 00:16:00.438 "params": { 00:16:00.438 "block_size": 4096, 00:16:00.438 "name": "malloc0", 00:16:00.438 "num_blocks": 8192, 00:16:00.438 "optimal_io_boundary": 0, 00:16:00.438 "physical_block_size": 4096, 00:16:00.438 "uuid": "fc6f382c-48c5-4728-b937-f51c33ed1c66" 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "bdev_wait_for_examine" 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "nbd", 00:16:00.438 "config": [] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "scheduler", 00:16:00.438 "config": [ 00:16:00.438 { 00:16:00.438 "method": "framework_set_scheduler", 00:16:00.438 "params": { 00:16:00.438 "name": "static" 00:16:00.438 } 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "subsystem": "nvmf", 00:16:00.438 "config": [ 00:16:00.438 { 00:16:00.438 "method": "nvmf_set_config", 00:16:00.438 "params": { 00:16:00.438 "admin_cmd_passthru": { 00:16:00.438 "identify_ctrlr": false 00:16:00.438 }, 00:16:00.438 "discovery_filter": "match_any" 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_set_max_subsystems", 00:16:00.438 "params": { 00:16:00.438 "max_subsystems": 1024 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_set_crdt", 00:16:00.438 "params": { 00:16:00.438 "crdt1": 0, 00:16:00.438 "crdt2": 0, 00:16:00.438 "crdt3": 0 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_create_transport", 00:16:00.438 "params": { 00:16:00.438 "abort_timeout_sec": 1, 00:16:00.438 "ack_timeout": 0, 00:16:00.438 "buf_cache_size": 4294967295, 00:16:00.438 "c2h_success": false, 00:16:00.438 "data_wr_pool_size": 0, 00:16:00.438 "dif_insert_or_strip": false, 00:16:00.438 "in_capsule_data_size": 4096, 00:16:00.438 "io_unit_size": 131072, 00:16:00.438 "max_aq_depth": 128, 00:16:00.438 "max_io_qpairs_per_ctrlr": 127, 00:16:00.438 "max_io_size": 131072, 00:16:00.438 "max_queue_depth": 128, 00:16:00.438 "num_shared_buffers": 511, 00:16:00.438 "sock_priority": 0, 00:16:00.438 "trtype": "TCP", 00:16:00.438 "zcopy": false 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_create_subsystem", 00:16:00.438 "params": { 00:16:00.438 "allow_any_host": false, 00:16:00.438 "ana_reporting": false, 00:16:00.438 "max_cntlid": 65519, 00:16:00.438 "max_namespaces": 32, 00:16:00.438 "min_cntlid": 1, 00:16:00.438 "model_number": "SPDK bdev Controller", 00:16:00.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.438 "serial_number": "00000000000000000000" 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_subsystem_add_host", 00:16:00.438 "params": { 00:16:00.438 "host": "nqn.2016-06.io.spdk:host1", 00:16:00.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.438 "psk": "key0" 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_subsystem_add_ns", 00:16:00.438 "params": { 
00:16:00.438 "namespace": { 00:16:00.438 "bdev_name": "malloc0", 00:16:00.438 "nguid": "FC6F382C48C54728B937F51C33ED1C66", 00:16:00.438 "no_auto_visible": false, 00:16:00.438 "nsid": 1, 00:16:00.438 "uuid": "fc6f382c-48c5-4728-b937-f51c33ed1c66" 00:16:00.438 }, 00:16:00.438 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:00.438 } 00:16:00.438 }, 00:16:00.438 { 00:16:00.438 "method": "nvmf_subsystem_add_listener", 00:16:00.438 "params": { 00:16:00.438 "listen_address": { 00:16:00.438 "adrfam": "IPv4", 00:16:00.438 "traddr": "10.0.0.2", 00:16:00.438 "trsvcid": "4420", 00:16:00.438 "trtype": "TCP" 00:16:00.438 }, 00:16:00.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.438 "secure_channel": false, 00:16:00.438 "sock_impl": "ssl" 00:16:00.438 } 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 } 00:16:00.438 ] 00:16:00.438 }' 00:16:00.438 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.438 19:31:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85232 00:16:00.438 19:31:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:00.438 19:31:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85232 00:16:00.438 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85232 ']' 00:16:00.438 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.439 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.439 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.439 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.439 19:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.439 [2024-07-15 19:31:50.043095] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:00.439 [2024-07-15 19:31:50.043200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.439 [2024-07-15 19:31:50.179261] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.439 [2024-07-15 19:31:50.240612] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.439 [2024-07-15 19:31:50.240677] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.439 [2024-07-15 19:31:50.240689] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.439 [2024-07-15 19:31:50.240698] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.439 [2024-07-15 19:31:50.240705] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.697 [2024-07-15 19:31:50.240810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.697 [2024-07-15 19:31:50.443176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.697 [2024-07-15 19:31:50.475094] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:00.697 [2024-07-15 19:31:50.475475] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85257 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85257 /var/tmp/bdevperf.sock 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85257 ']' 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.955 19:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:16:00.955 "subsystems": [ 00:16:00.955 { 00:16:00.955 "subsystem": "keyring", 00:16:00.955 "config": [ 00:16:00.955 { 00:16:00.955 "method": "keyring_file_add_key", 00:16:00.955 "params": { 00:16:00.955 "name": "key0", 00:16:00.955 "path": "/tmp/tmp.2n47uQD22B" 00:16:00.955 } 00:16:00.955 } 00:16:00.955 ] 00:16:00.955 }, 00:16:00.955 { 00:16:00.955 "subsystem": "iobuf", 00:16:00.955 "config": [ 00:16:00.955 { 00:16:00.955 "method": "iobuf_set_options", 00:16:00.955 "params": { 00:16:00.955 "large_bufsize": 135168, 00:16:00.955 "large_pool_count": 1024, 00:16:00.955 "small_bufsize": 8192, 00:16:00.955 "small_pool_count": 8192 00:16:00.955 } 00:16:00.955 } 00:16:00.955 ] 00:16:00.955 }, 00:16:00.955 { 00:16:00.955 "subsystem": "sock", 00:16:00.956 "config": [ 00:16:00.956 { 00:16:00.956 "method": "sock_set_default_impl", 00:16:00.956 "params": { 00:16:00.956 "impl_name": "posix" 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "sock_impl_set_options", 00:16:00.956 "params": { 00:16:00.956 "enable_ktls": false, 00:16:00.956 "enable_placement_id": 0, 00:16:00.956 "enable_quickack": false, 00:16:00.956 "enable_recv_pipe": true, 00:16:00.956 "enable_zerocopy_send_client": false, 00:16:00.956 "enable_zerocopy_send_server": true, 00:16:00.956 "impl_name": "ssl", 00:16:00.956 "recv_buf_size": 4096, 00:16:00.956 "send_buf_size": 4096, 00:16:00.956 "tls_version": 0, 00:16:00.956 "zerocopy_threshold": 0 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "sock_impl_set_options", 00:16:00.956 "params": { 00:16:00.956 "enable_ktls": false, 00:16:00.956 "enable_placement_id": 0, 00:16:00.956 "enable_quickack": false, 00:16:00.956 "enable_recv_pipe": true, 00:16:00.956 "enable_zerocopy_send_client": false, 00:16:00.956 "enable_zerocopy_send_server": true, 00:16:00.956 "impl_name": "posix", 00:16:00.956 "recv_buf_size": 2097152, 00:16:00.956 "send_buf_size": 2097152, 00:16:00.956 "tls_version": 0, 00:16:00.956 "zerocopy_threshold": 0 00:16:00.956 } 00:16:00.956 } 00:16:00.956 ] 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "subsystem": "vmd", 00:16:00.956 "config": [] 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "subsystem": "accel", 00:16:00.956 "config": [ 00:16:00.956 { 00:16:00.956 "method": "accel_set_options", 00:16:00.956 "params": { 00:16:00.956 "buf_count": 2048, 00:16:00.956 "large_cache_size": 16, 00:16:00.956 "sequence_count": 2048, 00:16:00.956 "small_cache_size": 128, 00:16:00.956 "task_count": 2048 00:16:00.956 } 00:16:00.956 } 00:16:00.956 ] 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "subsystem": "bdev", 00:16:00.956 "config": [ 00:16:00.956 { 00:16:00.956 "method": "bdev_set_options", 00:16:00.956 "params": { 00:16:00.956 "bdev_auto_examine": true, 00:16:00.956 "bdev_io_cache_size": 256, 00:16:00.956 "bdev_io_pool_size": 65535, 00:16:00.956 "iobuf_large_cache_size": 16, 00:16:00.956 "iobuf_small_cache_size": 128 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "bdev_raid_set_options", 00:16:00.956 "params": { 00:16:00.956 "process_window_size_kb": 1024 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "bdev_iscsi_set_options", 00:16:00.956 "params": { 00:16:00.956 "timeout_sec": 30 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": 
"bdev_nvme_set_options", 00:16:00.956 "params": { 00:16:00.956 "action_on_timeout": "none", 00:16:00.956 "allow_accel_sequence": false, 00:16:00.956 "arbitration_burst": 0, 00:16:00.956 "bdev_retry_count": 3, 00:16:00.956 "ctrlr_loss_timeout_sec": 0, 00:16:00.956 "delay_cmd_submit": true, 00:16:00.956 "dhchap_dhgroups": [ 00:16:00.956 "null", 00:16:00.956 "ffdhe2048", 00:16:00.956 "ffdhe3072", 00:16:00.956 "ffdhe4096", 00:16:00.956 "ffdhe6144", 00:16:00.956 "ffdhe8192" 00:16:00.956 ], 00:16:00.956 "dhchap_digests": [ 00:16:00.956 "sha256", 00:16:00.956 "sha384", 00:16:00.956 "sha512" 00:16:00.956 ], 00:16:00.956 "disable_auto_failback": false, 00:16:00.956 "fast_io_fail_timeout_sec": 0, 00:16:00.956 "generate_uuids": false, 00:16:00.956 "high_priority_weight": 0, 00:16:00.956 "io_path_stat": false, 00:16:00.956 "io_queue_requests": 512, 00:16:00.956 "keep_alive_timeout_ms": 10000, 00:16:00.956 "low_priority_weight": 0, 00:16:00.956 "medium_priority_weight": 0, 00:16:00.956 "nvme_adminq_poll_period_us": 10000, 00:16:00.956 "nvme_error_stat": false, 00:16:00.956 "nvme_ioq_poll_period_us": 0, 00:16:00.956 "rdma_cm_event_timeout_ms": 0, 00:16:00.956 "rdma_max_cq_size": 0, 00:16:00.956 "rdma_srq_size": 0, 00:16:00.956 "reconnect_delay_sec": 0, 00:16:00.956 "timeout_admin_us": 0, 00:16:00.956 "timeout_us": 0, 00:16:00.956 "transport_ack_timeout": 0, 00:16:00.956 "transport_retry_count": 4, 00:16:00.956 "transport_tos": 0 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "bdev_nvme_attach_controller", 00:16:00.956 "params": { 00:16:00.956 "adrfam": "IPv4", 00:16:00.956 "ctrlr_loss_timeout_sec": 0, 00:16:00.956 "ddgst": false, 00:16:00.956 "fast_io_fail_timeout_sec": 0, 00:16:00.956 "hdgst": false, 00:16:00.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:00.956 "name": "nvme0", 00:16:00.956 "prchk_guard": false, 00:16:00.956 "prchk_reftag": false, 00:16:00.956 "psk": "key0", 00:16:00.956 "reconnect_delay_sec": 0, 00:16:00.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.956 "traddr": "10.0.0.2", 00:16:00.956 "trsvcid": "4420", 00:16:00.956 "trtype": "TCP" 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "bdev_nvme_set_hotplug", 00:16:00.956 "params": { 00:16:00.956 "enable": false, 00:16:00.956 "period_us": 100000 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "bdev_enable_histogram", 00:16:00.956 "params": { 00:16:00.956 "enable": true, 00:16:00.956 "name": "nvme0n1" 00:16:00.956 } 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "method": "bdev_wait_for_examine" 00:16:00.956 } 00:16:00.956 ] 00:16:00.956 }, 00:16:00.956 { 00:16:00.956 "subsystem": "nbd", 00:16:00.956 "config": [] 00:16:00.956 } 00:16:00.956 ] 00:16:00.956 }' 00:16:00.956 [2024-07-15 19:31:50.612156] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:16:00.956 [2024-07-15 19:31:50.612283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85257 ] 00:16:01.215 [2024-07-15 19:31:50.761908] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.215 [2024-07-15 19:31:50.850725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.215 [2024-07-15 19:31:50.999802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.148 19:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.148 19:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:02.148 19:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:02.148 19:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:16:02.406 19:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.406 19:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:02.406 Running I/O for 1 seconds... 00:16:03.341 00:16:03.341 Latency(us) 00:16:03.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.341 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:03.341 Verification LBA range: start 0x0 length 0x2000 00:16:03.341 nvme0n1 : 1.02 3709.49 14.49 0.00 0.00 34115.17 6076.97 30504.03 00:16:03.341 =================================================================================================================== 00:16:03.341 Total : 3709.49 14.49 0.00 0.00 34115.17 6076.97 30504.03 00:16:03.341 0 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:03.341 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:03.341 nvmf_trace.0 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85257 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85257 ']' 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85257 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:03.598 
19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85257 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:03.598 killing process with pid 85257 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85257' 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85257 00:16:03.598 Received shutdown signal, test time was about 1.000000 seconds 00:16:03.598 00:16:03.598 Latency(us) 00:16:03.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.598 =================================================================================================================== 00:16:03.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.598 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85257 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.856 rmmod nvme_tcp 00:16:03.856 rmmod nvme_fabrics 00:16:03.856 rmmod nvme_keyring 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85232 ']' 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85232 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85232 ']' 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85232 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85232 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:03.856 killing process with pid 85232 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85232' 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85232 00:16:03.856 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85232 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kDESYgoLNW /tmp/tmp.i58FJ0Ywj5 /tmp/tmp.2n47uQD22B 00:16:04.114 00:16:04.114 real 1m24.731s 00:16:04.114 user 2m14.554s 00:16:04.114 sys 0m28.325s 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.114 19:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.114 ************************************ 00:16:04.114 END TEST nvmf_tls 00:16:04.114 ************************************ 00:16:04.114 19:31:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.114 19:31:53 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:04.114 19:31:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.114 19:31:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.114 19:31:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.114 ************************************ 00:16:04.114 START TEST nvmf_fips 00:16:04.114 ************************************ 00:16:04.114 19:31:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:04.114 * Looking for test storage... 
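The nvmf_tls teardown that just completed above reduces to a short sequence, sketched here: stop the target, unload the kernel initiator modules, tear down the test namespace and addresses, and delete the temporary PSK files. $nvmfpid is a placeholder, and the explicit ip netns delete stands in for what remove_spdk_ns does internally.

    kill "$nvmfpid" && wait "$nvmfpid"        # stop the nvmf_tgt app
    modprobe -v -r nvme-tcp nvme-fabrics      # unload the kernel initiator modules
    ip netns delete nvmf_tgt_ns_spdk          # drop the target-side namespace (assumed equivalent)
    ip -4 addr flush nvmf_init_if             # clear the initiator-side address
    rm -f /tmp/tmp.kDESYgoLNW /tmp/tmp.i58FJ0Ywj5 /tmp/tmp.2n47uQD22B   # remove the PSK interchange files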
00:16:04.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:04.114 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.114 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:04.114 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.114 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.114 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:04.115 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:04.375 19:31:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:04.375 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:04.376 Error setting digest 00:16:04.376 0022161B017F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:04.376 0022161B017F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:04.376 Cannot find device "nvmf_tgt_br" 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.376 Cannot find device "nvmf_tgt_br2" 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:04.376 Cannot find device "nvmf_tgt_br" 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:04.376 Cannot find device "nvmf_tgt_br2" 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:04.376 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:04.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:04.645 00:16:04.645 --- 10.0.0.2 ping statistics --- 00:16:04.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.645 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:04.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:04.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:04.645 00:16:04.645 --- 10.0.0.3 ping statistics --- 00:16:04.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.645 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:16:04.645 00:16:04.645 --- 10.0.0.1 ping statistics --- 00:16:04.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.645 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85550 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:04.645 19:31:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85550 00:16:04.903 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85550 ']' 00:16:04.903 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.903 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.903 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.903 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.903 19:31:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:04.903 [2024-07-15 19:31:54.563349] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:16:04.903 [2024-07-15 19:31:54.563484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.903 [2024-07-15 19:31:54.705119] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.161 [2024-07-15 19:31:54.793054] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.161 [2024-07-15 19:31:54.793145] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.161 [2024-07-15 19:31:54.793171] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.161 [2024-07-15 19:31:54.793187] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.161 [2024-07-15 19:31:54.793199] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.161 [2024-07-15 19:31:54.793245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:06.092 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.092 [2024-07-15 19:31:55.834881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.092 [2024-07-15 19:31:55.850877] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:06.092 [2024-07-15 19:31:55.851175] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.092 [2024-07-15 19:31:55.878352] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:06.092 malloc0 00:16:06.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
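Stripped of the xtrace plumbing, the PSK handling that fips.sh performs here is just: write the interop TLS key to a file, lock down its permissions, and hand the path to the running target over scripts/rpc.py (the nvmf_tcp_psk_path deprecation warning above comes from that registration). A minimal sketch using the key and path from the trace; the redirect into $key_path is implied by the chmod that follows it:
# PSK provisioning as performed by fips.sh above; key value and path copied from the trace.
key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"   # interop PSK written without a trailing newline
chmod 0600 "$key_path"         # keep the shared key readable by the test user only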
00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85606 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85606 /var/tmp/bdevperf.sock 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85606 ']' 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.350 19:31:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:06.350 [2024-07-15 19:31:56.005119] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:06.350 [2024-07-15 19:31:56.005210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85606 ] 00:16:06.350 [2024-07-15 19:31:56.139912] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.607 [2024-07-15 19:31:56.202731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.607 19:31:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.607 19:31:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:06.607 19:31:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:06.865 [2024-07-15 19:31:56.512575] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.865 [2024-07-15 19:31:56.512685] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:06.865 TLSTESTn1 00:16:06.865 19:31:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:07.123 Running I/O for 10 seconds... 
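Three commands from the trace above do the actual TLS work on the initiator side: start bdevperf with its own RPC socket, attach the target's TLS listener using the PSK file, and launch the verify workload whose results follow. A condensed sketch of those invocations (the $SPDK shorthand for /home/vagrant/spdk_repo/spdk is added here only for readability):
SPDK=/home/vagrant/spdk_repo/spdk
# bdevperf in wait-for-RPC mode (-z) on its own socket: queue depth 128, 4 KiB verify I/O for 10 s.
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# Attach the TLS-protected subsystem as controller TLSTEST (bdev TLSTESTn1), authenticating with the PSK.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk $SPDK/test/nvmf/fips/key.txt
# Kick off the 10-second run; the latency table below is its output.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests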
00:16:17.161 00:16:17.161 Latency(us) 00:16:17.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.161 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:17.161 Verification LBA range: start 0x0 length 0x2000 00:16:17.161 TLSTESTn1 : 10.03 3566.73 13.93 0.00 0.00 35807.10 7477.06 37653.41 00:16:17.161 =================================================================================================================== 00:16:17.161 Total : 3566.73 13.93 0.00 0.00 35807.10 7477.06 37653.41 00:16:17.161 0 00:16:17.161 19:32:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:17.161 19:32:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:17.161 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:17.161 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:17.161 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:17.161 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:17.162 nvmf_trace.0 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85606 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85606 ']' 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85606 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85606 00:16:17.162 killing process with pid 85606 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85606' 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85606 00:16:17.162 Received shutdown signal, test time was about 10.000000 seconds 00:16:17.162 00:16:17.162 Latency(us) 00:16:17.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.162 =================================================================================================================== 00:16:17.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.162 [2024-07-15 19:32:06.933005] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:17.162 19:32:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85606 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
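The cleanup traced from here on undoes the whole setup. Condensed from the commands that follow, it archives the shared-memory trace file, stops both processes, unloads the kernel initiator modules, and removes the namespace plumbing and the PSK file:
# Condensed teardown matching the trace below (85606 = bdevperf, 85550 = nvmf_tgt).
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
kill 85606                       # stop bdevperf, then wait for it to exit
sync
modprobe -v -r nvme-tcp          # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill 85550                       # stop nvmf_tgt running inside nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if    # _remove_spdk_ns additionally deletes the namespace and bridge
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt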
00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:17.420 rmmod nvme_tcp 00:16:17.420 rmmod nvme_fabrics 00:16:17.420 rmmod nvme_keyring 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85550 ']' 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85550 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85550 ']' 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85550 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:17.420 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85550 00:16:17.681 killing process with pid 85550 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85550' 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85550 00:16:17.681 [2024-07-15 19:32:07.225445] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85550 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:17.681 00:16:17.681 real 0m13.647s 00:16:17.681 user 0m18.008s 00:16:17.681 sys 0m5.681s 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.681 ************************************ 00:16:17.681 END TEST nvmf_fips 00:16:17.681 ************************************ 00:16:17.681 19:32:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:17.681 19:32:07 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:17.681 19:32:07 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:16:17.681 19:32:07 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:16:17.681 19:32:07 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:16:17.681 19:32:07 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.941 19:32:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.941 19:32:07 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:16:17.941 19:32:07 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.941 19:32:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.941 19:32:07 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:16:17.941 19:32:07 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:17.941 19:32:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.941 19:32:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.941 19:32:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.941 ************************************ 00:16:17.941 START TEST nvmf_multicontroller 00:16:17.941 ************************************ 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:17.941 * Looking for test storage... 00:16:17.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
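Before the multicontroller test proper, nvmftestinit below rebuilds the same veth/namespace topology the FIPS test used above: one initiator veth pair left on the host, two target veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge. Condensed from the ip/iptables commands in the trace, the topology setup amounts to:
# Condensed nvmf_veth_init, as traced above and below (addresses 10.0.0.1-3 as in the log).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT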
00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.941 19:32:07 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:17.941 Cannot find device "nvmf_tgt_br" 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.941 Cannot find device "nvmf_tgt_br2" 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:17.941 Cannot find device "nvmf_tgt_br" 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:17.941 Cannot find device "nvmf_tgt_br2" 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:17.941 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:18.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:18.200 00:16:18.200 --- 10.0.0.2 ping statistics --- 00:16:18.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.200 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:18.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:18.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:18.200 00:16:18.200 --- 10.0.0.3 ping statistics --- 00:16:18.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.200 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:18.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:18.200 00:16:18.200 --- 10.0.0.1 ping statistics --- 00:16:18.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.200 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.200 19:32:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85952 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85952 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85952 ']' 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.458 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.458 [2024-07-15 19:32:08.067876] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:18.458 [2024-07-15 19:32:08.068005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.458 [2024-07-15 19:32:08.207558] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.717 [2024-07-15 19:32:08.270593] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:18.717 [2024-07-15 19:32:08.270647] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.717 [2024-07-15 19:32:08.270667] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.717 [2024-07-15 19:32:08.270680] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.717 [2024-07-15 19:32:08.270692] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.717 [2024-07-15 19:32:08.270834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.717 [2024-07-15 19:32:08.271507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.717 [2024-07-15 19:32:08.271518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.717 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.717 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:18.717 19:32:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.717 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 [2024-07-15 19:32:08.393030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 Malloc0 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 [2024-07-15 19:32:08.439295] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 [2024-07-15 19:32:08.447251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 Malloc1 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=85992 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85992 /var/tmp/bdevperf.sock 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85992 ']' 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.718 19:32:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 NVMe0n1 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.096 1 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 2024/07/15 19:32:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:20.096 request: 00:16:20.096 { 00:16:20.096 "method": "bdev_nvme_attach_controller", 00:16:20.096 "params": { 00:16:20.096 "name": "NVMe0", 00:16:20.096 "trtype": "tcp", 00:16:20.096 "traddr": "10.0.0.2", 00:16:20.096 "adrfam": "ipv4", 00:16:20.096 "trsvcid": "4420", 00:16:20.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.096 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:20.096 "hostaddr": "10.0.0.2", 00:16:20.096 "hostsvcid": "60000", 00:16:20.096 "prchk_reftag": false, 00:16:20.096 "prchk_guard": false, 00:16:20.096 "hdgst": false, 00:16:20.096 "ddgst": false 00:16:20.096 } 00:16:20.096 } 00:16:20.096 Got JSON-RPC error response 00:16:20.096 GoRPCClient: error on JSON-RPC call 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:20.096 19:32:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 2024/07/15 19:32:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:20.096 request: 00:16:20.096 { 00:16:20.096 "method": "bdev_nvme_attach_controller", 00:16:20.096 "params": { 00:16:20.096 "name": "NVMe0", 00:16:20.096 "trtype": "tcp", 00:16:20.096 "traddr": "10.0.0.2", 00:16:20.096 "adrfam": "ipv4", 00:16:20.096 "trsvcid": "4420", 00:16:20.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:20.096 "hostaddr": "10.0.0.2", 00:16:20.096 "hostsvcid": "60000", 00:16:20.096 "prchk_reftag": false, 00:16:20.096 "prchk_guard": false, 00:16:20.096 "hdgst": false, 00:16:20.096 "ddgst": false 00:16:20.096 } 00:16:20.096 } 00:16:20.096 Got JSON-RPC error response 00:16:20.096 GoRPCClient: error on JSON-RPC call 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:20.096 19:32:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.096 2024/07/15 19:32:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:20.096 request: 00:16:20.096 { 00:16:20.096 "method": "bdev_nvme_attach_controller", 00:16:20.096 "params": { 00:16:20.096 "name": "NVMe0", 00:16:20.096 "trtype": "tcp", 00:16:20.096 "traddr": "10.0.0.2", 00:16:20.096 "adrfam": "ipv4", 00:16:20.096 "trsvcid": "4420", 00:16:20.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.096 "hostaddr": "10.0.0.2", 00:16:20.096 "hostsvcid": "60000", 00:16:20.096 "prchk_reftag": false, 00:16:20.096 "prchk_guard": false, 00:16:20.096 "hdgst": false, 00:16:20.096 "ddgst": false, 00:16:20.096 "multipath": "disable" 00:16:20.096 } 00:16:20.096 } 00:16:20.096 Got JSON-RPC error response 00:16:20.096 GoRPCClient: error on JSON-RPC call 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:20.096 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 2024/07/15 19:32:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:20.097 request: 00:16:20.097 { 00:16:20.097 "method": "bdev_nvme_attach_controller", 00:16:20.097 "params": { 00:16:20.097 "name": "NVMe0", 00:16:20.097 "trtype": "tcp", 00:16:20.097 "traddr": "10.0.0.2", 00:16:20.097 "adrfam": "ipv4", 00:16:20.097 "trsvcid": "4420", 00:16:20.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.097 "hostaddr": "10.0.0.2", 00:16:20.097 "hostsvcid": "60000", 00:16:20.097 "prchk_reftag": false, 00:16:20.097 "prchk_guard": false, 00:16:20.097 "hdgst": false, 00:16:20.097 "ddgst": false, 00:16:20.097 "multipath": "failover" 00:16:20.097 } 00:16:20.097 } 00:16:20.097 Got JSON-RPC error response 00:16:20.097 GoRPCClient: error on JSON-RPC call 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:20.097 19:32:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:21.484 0 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85992 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85992 ']' 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85992 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85992 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.484 killing process with pid 85992 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85992' 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85992 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85992 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:21.484 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:21.484 [2024-07-15 19:32:08.561189] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:21.484 [2024-07-15 19:32:08.561329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85992 ] 00:16:21.484 [2024-07-15 19:32:08.706164] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.484 [2024-07-15 19:32:08.774716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.484 [2024-07-15 19:32:09.848057] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 5bce5d51-0431-405c-bacc-8701f7f24c62 already exists 00:16:21.484 [2024-07-15 19:32:09.848118] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:5bce5d51-0431-405c-bacc-8701f7f24c62 alias for bdev NVMe1n1 00:16:21.484 [2024-07-15 19:32:09.848137] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:21.484 Running I/O for 1 seconds... 00:16:21.484 00:16:21.484 Latency(us) 00:16:21.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.484 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:21.484 NVMe0n1 : 1.00 19241.22 75.16 0.00 0.00 6641.33 3932.16 14775.39 00:16:21.484 =================================================================================================================== 00:16:21.484 Total : 19241.22 75.16 0.00 0.00 6641.33 3932.16 14775.39 00:16:21.484 Received shutdown signal, test time was about 1.000000 seconds 00:16:21.484 00:16:21.484 Latency(us) 00:16:21.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.484 =================================================================================================================== 00:16:21.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.484 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.484 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:21.742 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.742 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:21.742 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.743 rmmod nvme_tcp 00:16:21.743 rmmod nvme_fabrics 00:16:21.743 rmmod nvme_keyring 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.743 19:32:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85952 ']' 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85952 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85952 ']' 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85952 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85952 00:16:21.743 killing process with pid 85952 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85952' 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85952 00:16:21.743 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85952 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.001 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.002 19:32:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.002 00:16:22.002 real 0m4.068s 00:16:22.002 user 0m12.864s 00:16:22.002 sys 0m0.923s 00:16:22.002 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.002 19:32:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.002 ************************************ 00:16:22.002 END TEST nvmf_multicontroller 00:16:22.002 ************************************ 00:16:22.002 19:32:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.002 19:32:11 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:22.002 19:32:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.002 19:32:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.002 19:32:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.002 ************************************ 00:16:22.002 START TEST nvmf_aer 00:16:22.002 ************************************ 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:22.002 * Looking for test storage... 00:16:22.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:22.002 Cannot find device "nvmf_tgt_br" 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.002 Cannot find device "nvmf_tgt_br2" 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:22.002 Cannot find device "nvmf_tgt_br" 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:22.002 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:22.261 Cannot find device "nvmf_tgt_br2" 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.261 
19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:22.261 19:32:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:22.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:22.261 00:16:22.261 --- 10.0.0.2 ping statistics --- 00:16:22.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.261 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:22.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:22.261 00:16:22.261 --- 10.0.0.3 ping statistics --- 00:16:22.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.261 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:22.261 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:22.520 00:16:22.520 --- 10.0.0.1 ping statistics --- 00:16:22.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.520 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86242 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86242 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86242 ']' 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.520 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.520 [2024-07-15 19:32:12.144351] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:22.520 [2024-07-15 19:32:12.144454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.520 [2024-07-15 19:32:12.284504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.779 [2024-07-15 19:32:12.356242] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.779 [2024-07-15 19:32:12.356320] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:22.779 [2024-07-15 19:32:12.356335] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.779 [2024-07-15 19:32:12.356345] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.779 [2024-07-15 19:32:12.356354] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.779 [2024-07-15 19:32:12.356465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.779 [2024-07-15 19:32:12.356960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.779 [2024-07-15 19:32:12.357255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.779 [2024-07-15 19:32:12.357263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 [2024-07-15 19:32:12.489288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 Malloc0 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 [2024-07-15 19:32:12.550056] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 [ 00:16:22.779 { 00:16:22.779 "allow_any_host": true, 00:16:22.779 "hosts": [], 00:16:22.779 "listen_addresses": [], 00:16:22.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.779 "subtype": "Discovery" 00:16:22.779 }, 00:16:22.779 { 00:16:22.779 "allow_any_host": true, 00:16:22.779 "hosts": [], 00:16:22.779 "listen_addresses": [ 00:16:22.779 { 00:16:22.779 "adrfam": "IPv4", 00:16:22.779 "traddr": "10.0.0.2", 00:16:22.779 "trsvcid": "4420", 00:16:22.779 "trtype": "TCP" 00:16:22.779 } 00:16:22.779 ], 00:16:22.779 "max_cntlid": 65519, 00:16:22.779 "max_namespaces": 2, 00:16:22.779 "min_cntlid": 1, 00:16:22.779 "model_number": "SPDK bdev Controller", 00:16:22.779 "namespaces": [ 00:16:22.779 { 00:16:22.779 "bdev_name": "Malloc0", 00:16:22.779 "name": "Malloc0", 00:16:22.779 "nguid": "4C40847958D84DB2B11642CCF395EBBA", 00:16:22.779 "nsid": 1, 00:16:22.779 "uuid": "4c408479-58d8-4db2-b116-42ccf395ebba" 00:16:22.779 } 00:16:22.779 ], 00:16:22.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.779 "serial_number": "SPDK00000000000001", 00:16:22.779 "subtype": "NVMe" 00:16:22.779 } 00:16:22.779 ] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86288 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:22.779 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.038 Malloc1 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.038 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 [ 00:16:23.297 { 00:16:23.297 "allow_any_host": true, 00:16:23.297 "hosts": [], 00:16:23.297 "listen_addresses": [], 00:16:23.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.297 "subtype": "Discovery" 00:16:23.297 }, 00:16:23.297 { 00:16:23.297 "allow_any_host": true, 00:16:23.297 "hosts": [], 00:16:23.297 "listen_addresses": [ 00:16:23.297 { 00:16:23.297 "adrfam": "IPv4", 00:16:23.297 "traddr": "10.0.0.2", 00:16:23.297 "trsvcid": "4420", 00:16:23.297 "trtype": "TCP" 00:16:23.297 } 00:16:23.297 ], 00:16:23.297 "max_cntlid": 65519, 00:16:23.297 "max_namespaces": 2, 00:16:23.297 "min_cntlid": 1, 00:16:23.297 "model_number": "SPDK bdev Controller", 00:16:23.297 "namespaces": [ 00:16:23.297 { 00:16:23.297 "bdev_name": "Malloc0", 00:16:23.297 "name": "Malloc0", 00:16:23.297 "nguid": "4C40847958D84DB2B11642CCF395EBBA", 00:16:23.297 "nsid": 1, 00:16:23.297 "uuid": "4c408479-58d8-4db2-b116-42ccf395ebba" 00:16:23.297 }, 00:16:23.297 { 00:16:23.297 "bdev_name": "Malloc1", 00:16:23.297 "name": "Malloc1", 00:16:23.297 "nguid": "1A5859FBC555483DB4515B3A12B71710", 00:16:23.297 "nsid": 2, 00:16:23.297 "uuid": "1a5859fb-c555-483d-b451-5b3a12b71710" 00:16:23.297 } 00:16:23.297 ], 00:16:23.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.297 "serial_number": "SPDK00000000000001", 00:16:23.297 "subtype": "NVMe" 00:16:23.297 } 00:16:23.297 ] 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86288 00:16:23.297 Asynchronous Event Request test 00:16:23.297 Attaching to 10.0.0.2 00:16:23.297 Attached to 10.0.0.2 00:16:23.297 Registering asynchronous event callbacks... 00:16:23.297 Starting namespace attribute notice tests for all controllers... 00:16:23.297 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:23.297 aer_cb - Changed Namespace 00:16:23.297 Cleaning up... 
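The trace above is the namespace-change AER check: the aer helper is started against nqn.2016-06.io.spdk:cnode1 with -n 2 and a touch file, and attaching Malloc1 as nsid 2 is what produces the "aer_cb for log page 4 ... Changed Namespace" notice. A minimal hand-run sketch of the same flow, assuming a target is already serving cnode1 (Malloc0 as nsid 1) on 10.0.0.2:4420 and that scripts/rpc.py reaches the default /var/tmp/spdk.sock; the direct rpc.py calls and standalone use outside the harness (which goes through its rpc_cmd wrapper) are assumptions, not the harness itself:

    # Sketch of the namespace-change AER flow, under the assumptions stated above.
    SPDK=/home/vagrant/spdk_repo/spdk

    # Start the AER helper with the same arguments as in the log; -t makes it
    # touch the file once it is attached and ready for notifications.
    $SPDK/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
    aer_pid=$!

    # Wait for the readiness marker, as the harness's waitforfile loop does.
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # Adding a second namespace changes the subsystem's namespace list, so the
    # target emits a Namespace Attribute Changed AEN (log page 0x04) to the host.
    $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

    # The helper exits after handling the expected notice, as seen in the log.
    wait $aer_pid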
00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.297 rmmod nvme_tcp 00:16:23.297 rmmod nvme_fabrics 00:16:23.297 rmmod nvme_keyring 00:16:23.297 19:32:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86242 ']' 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86242 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86242 ']' 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86242 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86242 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.297 killing process with pid 86242 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86242' 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86242 00:16:23.297 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86242 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
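The teardown above is nvmftestfini's work: drop the test bdevs and subsystem over RPC, unload the host-side NVMe/TCP modules, and stop the target. A hand-run sketch of the same cleanup, assuming the target's PID is held in $nvmfpid and that scripts/rpc.py reaches the default RPC socket; both the variable name and the direct rpc.py use are assumptions, since the harness goes through rpc_cmd and killprocess:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Remove the namespaces' backing bdevs and the subsystem created for the test.
    $SPDK/scripts/rpc.py bdev_malloc_delete Malloc0
    $SPDK/scripts/rpc.py bdev_malloc_delete Malloc1
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the initiator-side modules pulled in for the TCP transport; as the
    # log shows, removing nvme-tcp also drops nvme_fabrics and nvme_keyring
    # when nothing else holds them.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt started for the test ($nvmfpid assumed; wait only works
    # if the target was launched from this same shell).
    kill "$nvmfpid" && wait "$nvmfpid"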
00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:23.556 00:16:23.556 real 0m1.594s 00:16:23.556 user 0m3.409s 00:16:23.556 sys 0m0.547s 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.556 19:32:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:23.556 ************************************ 00:16:23.556 END TEST nvmf_aer 00:16:23.556 ************************************ 00:16:23.556 19:32:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.556 19:32:13 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:23.556 19:32:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.556 19:32:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.556 19:32:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.556 ************************************ 00:16:23.556 START TEST nvmf_async_init 00:16:23.556 ************************************ 00:16:23.556 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:23.815 * Looking for test storage... 
00:16:23.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dcebf79851be4be9a51d81024228b640 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.815 19:32:13 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.815 Cannot find device "nvmf_tgt_br" 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.815 Cannot find device "nvmf_tgt_br2" 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.815 Cannot find device "nvmf_tgt_br" 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:16:23.815 Cannot find device "nvmf_tgt_br2" 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.815 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.816 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.816 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.816 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:24.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:24.074 00:16:24.074 --- 10.0.0.2 ping statistics --- 00:16:24.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.074 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:24.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:24.074 00:16:24.074 --- 10.0.0.3 ping statistics --- 00:16:24.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.074 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:24.074 00:16:24.074 --- 10.0.0.1 ping statistics --- 00:16:24.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.074 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.074 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86458 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86458 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86458 ']' 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:24.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.075 19:32:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:24.075 [2024-07-15 19:32:13.830602] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:24.075 [2024-07-15 19:32:13.830706] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.333 [2024-07-15 19:32:13.969420] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.333 [2024-07-15 19:32:14.039646] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.333 [2024-07-15 19:32:14.039705] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.333 [2024-07-15 19:32:14.039719] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.333 [2024-07-15 19:32:14.039730] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.333 [2024-07-15 19:32:14.039738] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.333 [2024-07-15 19:32:14.039772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.270 [2024-07-15 19:32:14.881981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.270 null0 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.270 19:32:14 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dcebf79851be4be9a51d81024228b640 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.271 [2024-07-15 19:32:14.922104] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.271 19:32:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 nvme0n1 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 [ 00:16:25.530 { 00:16:25.530 "aliases": [ 00:16:25.530 "dcebf798-51be-4be9-a51d-81024228b640" 00:16:25.530 ], 00:16:25.530 "assigned_rate_limits": { 00:16:25.530 "r_mbytes_per_sec": 0, 00:16:25.530 "rw_ios_per_sec": 0, 00:16:25.530 "rw_mbytes_per_sec": 0, 00:16:25.530 "w_mbytes_per_sec": 0 00:16:25.530 }, 00:16:25.530 "block_size": 512, 00:16:25.530 "claimed": false, 00:16:25.530 "driver_specific": { 00:16:25.530 "mp_policy": "active_passive", 00:16:25.530 "nvme": [ 00:16:25.530 { 00:16:25.530 "ctrlr_data": { 00:16:25.530 "ana_reporting": false, 00:16:25.530 "cntlid": 1, 00:16:25.530 "firmware_revision": "24.09", 00:16:25.530 "model_number": "SPDK bdev Controller", 00:16:25.530 "multi_ctrlr": true, 00:16:25.530 "oacs": { 00:16:25.530 "firmware": 0, 00:16:25.530 "format": 0, 00:16:25.530 "ns_manage": 0, 00:16:25.530 "security": 0 00:16:25.530 }, 00:16:25.530 "serial_number": "00000000000000000000", 00:16:25.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.530 "vendor_id": "0x8086" 00:16:25.530 }, 00:16:25.530 "ns_data": { 00:16:25.530 "can_share": true, 00:16:25.530 "id": 1 00:16:25.530 }, 00:16:25.530 "trid": { 00:16:25.530 "adrfam": "IPv4", 00:16:25.530 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:16:25.530 "traddr": "10.0.0.2", 00:16:25.530 "trsvcid": "4420", 00:16:25.530 "trtype": "TCP" 00:16:25.530 }, 00:16:25.530 "vs": { 00:16:25.530 "nvme_version": "1.3" 00:16:25.530 } 00:16:25.530 } 00:16:25.530 ] 00:16:25.530 }, 00:16:25.530 "memory_domains": [ 00:16:25.530 { 00:16:25.530 "dma_device_id": "system", 00:16:25.530 "dma_device_type": 1 00:16:25.530 } 00:16:25.530 ], 00:16:25.530 "name": "nvme0n1", 00:16:25.530 "num_blocks": 2097152, 00:16:25.530 "product_name": "NVMe disk", 00:16:25.530 "supported_io_types": { 00:16:25.530 "abort": true, 00:16:25.530 "compare": true, 00:16:25.530 "compare_and_write": true, 00:16:25.530 "copy": true, 00:16:25.530 "flush": true, 00:16:25.530 "get_zone_info": false, 00:16:25.530 "nvme_admin": true, 00:16:25.530 "nvme_io": true, 00:16:25.530 "nvme_io_md": false, 00:16:25.530 "nvme_iov_md": false, 00:16:25.530 "read": true, 00:16:25.530 "reset": true, 00:16:25.530 "seek_data": false, 00:16:25.530 "seek_hole": false, 00:16:25.530 "unmap": false, 00:16:25.530 "write": true, 00:16:25.530 "write_zeroes": true, 00:16:25.530 "zcopy": false, 00:16:25.530 "zone_append": false, 00:16:25.530 "zone_management": false 00:16:25.530 }, 00:16:25.530 "uuid": "dcebf798-51be-4be9-a51d-81024228b640", 00:16:25.530 "zoned": false 00:16:25.530 } 00:16:25.530 ] 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 [2024-07-15 19:32:15.187726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:25.530 [2024-07-15 19:32:15.187830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1402c20 (9): Bad file descriptor 00:16:25.530 [2024-07-15 19:32:15.319536] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.530 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 [ 00:16:25.530 { 00:16:25.530 "aliases": [ 00:16:25.530 "dcebf798-51be-4be9-a51d-81024228b640" 00:16:25.530 ], 00:16:25.789 "assigned_rate_limits": { 00:16:25.789 "r_mbytes_per_sec": 0, 00:16:25.789 "rw_ios_per_sec": 0, 00:16:25.789 "rw_mbytes_per_sec": 0, 00:16:25.789 "w_mbytes_per_sec": 0 00:16:25.789 }, 00:16:25.789 "block_size": 512, 00:16:25.789 "claimed": false, 00:16:25.789 "driver_specific": { 00:16:25.789 "mp_policy": "active_passive", 00:16:25.789 "nvme": [ 00:16:25.789 { 00:16:25.789 "ctrlr_data": { 00:16:25.789 "ana_reporting": false, 00:16:25.790 "cntlid": 2, 00:16:25.790 "firmware_revision": "24.09", 00:16:25.790 "model_number": "SPDK bdev Controller", 00:16:25.790 "multi_ctrlr": true, 00:16:25.790 "oacs": { 00:16:25.790 "firmware": 0, 00:16:25.790 "format": 0, 00:16:25.790 "ns_manage": 0, 00:16:25.790 "security": 0 00:16:25.790 }, 00:16:25.790 "serial_number": "00000000000000000000", 00:16:25.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.790 "vendor_id": "0x8086" 00:16:25.790 }, 00:16:25.790 "ns_data": { 00:16:25.790 "can_share": true, 00:16:25.790 "id": 1 00:16:25.790 }, 00:16:25.790 "trid": { 00:16:25.790 "adrfam": "IPv4", 00:16:25.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.790 "traddr": "10.0.0.2", 00:16:25.790 "trsvcid": "4420", 00:16:25.790 "trtype": "TCP" 00:16:25.790 }, 00:16:25.790 "vs": { 00:16:25.790 "nvme_version": "1.3" 00:16:25.790 } 00:16:25.790 } 00:16:25.790 ] 00:16:25.790 }, 00:16:25.790 "memory_domains": [ 00:16:25.790 { 00:16:25.790 "dma_device_id": "system", 00:16:25.790 "dma_device_type": 1 00:16:25.790 } 00:16:25.790 ], 00:16:25.790 "name": "nvme0n1", 00:16:25.790 "num_blocks": 2097152, 00:16:25.790 "product_name": "NVMe disk", 00:16:25.790 "supported_io_types": { 00:16:25.790 "abort": true, 00:16:25.790 "compare": true, 00:16:25.790 "compare_and_write": true, 00:16:25.790 "copy": true, 00:16:25.790 "flush": true, 00:16:25.790 "get_zone_info": false, 00:16:25.790 "nvme_admin": true, 00:16:25.790 "nvme_io": true, 00:16:25.790 "nvme_io_md": false, 00:16:25.790 "nvme_iov_md": false, 00:16:25.790 "read": true, 00:16:25.790 "reset": true, 00:16:25.790 "seek_data": false, 00:16:25.790 "seek_hole": false, 00:16:25.790 "unmap": false, 00:16:25.790 "write": true, 00:16:25.790 "write_zeroes": true, 00:16:25.790 "zcopy": false, 00:16:25.790 "zone_append": false, 00:16:25.790 "zone_management": false 00:16:25.790 }, 00:16:25.790 "uuid": "dcebf798-51be-4be9-a51d-81024228b640", 00:16:25.790 "zoned": false 00:16:25.790 } 00:16:25.790 ] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:25.790 19:32:15 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7szpV0ks8k 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7szpV0ks8k 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.790 [2024-07-15 19:32:15.383924] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:25.790 [2024-07-15 19:32:15.384080] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7szpV0ks8k 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.790 [2024-07-15 19:32:15.391909] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7szpV0ks8k 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.790 [2024-07-15 19:32:15.399924] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.790 [2024-07-15 19:32:15.399990] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:25.790 nvme0n1 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.790 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.790 [ 00:16:25.790 { 00:16:25.790 "aliases": [ 00:16:25.790 "dcebf798-51be-4be9-a51d-81024228b640" 00:16:25.790 ], 00:16:25.790 "assigned_rate_limits": { 00:16:25.790 "r_mbytes_per_sec": 0, 00:16:25.790 
"rw_ios_per_sec": 0, 00:16:25.790 "rw_mbytes_per_sec": 0, 00:16:25.790 "w_mbytes_per_sec": 0 00:16:25.790 }, 00:16:25.790 "block_size": 512, 00:16:25.790 "claimed": false, 00:16:25.790 "driver_specific": { 00:16:25.790 "mp_policy": "active_passive", 00:16:25.790 "nvme": [ 00:16:25.790 { 00:16:25.790 "ctrlr_data": { 00:16:25.790 "ana_reporting": false, 00:16:25.790 "cntlid": 3, 00:16:25.790 "firmware_revision": "24.09", 00:16:25.790 "model_number": "SPDK bdev Controller", 00:16:25.790 "multi_ctrlr": true, 00:16:25.790 "oacs": { 00:16:25.790 "firmware": 0, 00:16:25.790 "format": 0, 00:16:25.790 "ns_manage": 0, 00:16:25.790 "security": 0 00:16:25.790 }, 00:16:25.790 "serial_number": "00000000000000000000", 00:16:25.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.790 "vendor_id": "0x8086" 00:16:25.790 }, 00:16:25.790 "ns_data": { 00:16:25.790 "can_share": true, 00:16:25.790 "id": 1 00:16:25.790 }, 00:16:25.790 "trid": { 00:16:25.790 "adrfam": "IPv4", 00:16:25.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.790 "traddr": "10.0.0.2", 00:16:25.790 "trsvcid": "4421", 00:16:25.790 "trtype": "TCP" 00:16:25.790 }, 00:16:25.790 "vs": { 00:16:25.790 "nvme_version": "1.3" 00:16:25.790 } 00:16:25.790 } 00:16:25.790 ] 00:16:25.790 }, 00:16:25.790 "memory_domains": [ 00:16:25.790 { 00:16:25.790 "dma_device_id": "system", 00:16:25.790 "dma_device_type": 1 00:16:25.790 } 00:16:25.790 ], 00:16:25.790 "name": "nvme0n1", 00:16:25.790 "num_blocks": 2097152, 00:16:25.790 "product_name": "NVMe disk", 00:16:25.790 "supported_io_types": { 00:16:25.790 "abort": true, 00:16:25.790 "compare": true, 00:16:25.790 "compare_and_write": true, 00:16:25.790 "copy": true, 00:16:25.790 "flush": true, 00:16:25.790 "get_zone_info": false, 00:16:25.790 "nvme_admin": true, 00:16:25.790 "nvme_io": true, 00:16:25.790 "nvme_io_md": false, 00:16:25.790 "nvme_iov_md": false, 00:16:25.790 "read": true, 00:16:25.790 "reset": true, 00:16:25.790 "seek_data": false, 00:16:25.790 "seek_hole": false, 00:16:25.790 "unmap": false, 00:16:25.790 "write": true, 00:16:25.790 "write_zeroes": true, 00:16:25.790 "zcopy": false, 00:16:25.790 "zone_append": false, 00:16:25.790 "zone_management": false 00:16:25.790 }, 00:16:25.790 "uuid": "dcebf798-51be-4be9-a51d-81024228b640", 00:16:25.790 "zoned": false 00:16:25.790 } 00:16:25.791 ] 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.7szpV0ks8k 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:16:25.791 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.791 rmmod nvme_tcp 00:16:25.791 rmmod nvme_fabrics 00:16:26.049 rmmod nvme_keyring 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86458 ']' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86458 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86458 ']' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86458 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86458 00:16:26.049 killing process with pid 86458 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86458' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86458 00:16:26.049 [2024-07-15 19:32:15.655815] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:26.049 [2024-07-15 19:32:15.655853] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86458 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.049 19:32:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:26.308 ************************************ 00:16:26.308 END TEST nvmf_async_init 00:16:26.308 ************************************ 00:16:26.308 00:16:26.308 real 0m2.567s 00:16:26.308 user 0m2.470s 00:16:26.308 sys 0m0.546s 00:16:26.308 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.308 19:32:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:26.308 19:32:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.308 19:32:15 nvmf_tcp -- 
nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:26.308 19:32:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.308 19:32:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.308 19:32:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.308 ************************************ 00:16:26.308 START TEST dma 00:16:26.308 ************************************ 00:16:26.308 19:32:15 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:26.308 * Looking for test storage... 00:16:26.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.308 19:32:15 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.308 19:32:15 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.308 19:32:15 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.308 19:32:15 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.308 19:32:15 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.308 19:32:15 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.308 19:32:15 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.308 19:32:15 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:16:26.308 19:32:15 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.308 19:32:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.308 19:32:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:26.308 19:32:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:16:26.308 00:16:26.308 real 0m0.095s 00:16:26.308 user 0m0.045s 00:16:26.308 sys 0m0.054s 00:16:26.308 19:32:15 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.308 ************************************ 00:16:26.308 END TEST dma 00:16:26.308 19:32:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:16:26.308 ************************************ 00:16:26.308 19:32:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.308 19:32:16 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:26.308 19:32:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.308 19:32:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.308 19:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.308 
************************************ 00:16:26.308 START TEST nvmf_identify 00:16:26.308 ************************************ 00:16:26.308 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:26.569 * Looking for test storage... 00:16:26.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.569 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:26.570 Cannot find device "nvmf_tgt_br" 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.570 Cannot find device "nvmf_tgt_br2" 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:26.570 Cannot find device "nvmf_tgt_br" 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:26.570 Cannot find device "nvmf_tgt_br2" 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.570 19:32:16 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:26.570 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.836 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:26.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:26.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:26.837 00:16:26.837 --- 10.0.0.2 ping statistics --- 00:16:26.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.837 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:26.837 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.837 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:26.837 00:16:26.837 --- 10.0.0.3 ping statistics --- 00:16:26.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.837 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:26.837 00:16:26.837 --- 10.0.0.1 ping statistics --- 00:16:26.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.837 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86726 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86726 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86726 ']' 00:16:26.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
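nvmf_veth_init above builds a small veth-and-bridge topology: the initiator keeps 10.0.0.1 in the root namespace, while the target addresses 10.0.0.2 and 10.0.0.3 live inside nvmf_tgt_ns_spdk, with the peer ends tied together by a bridge. Condensed from the commands above into a standalone sketch (run as root; interface and namespace names are the ones the test uses, and only the 10.0.0.2 listener is exercised in this excerpt, 10.0.0.3 is just pinged):

  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one initiator-side, two target-side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the root-namespace ends together and let NVMe/TCP traffic through.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity checks, mirroring the pings in the log.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1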
00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.837 19:32:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:26.837 [2024-07-15 19:32:16.558187] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:26.837 [2024-07-15 19:32:16.558307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.096 [2024-07-15 19:32:16.700418] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.096 [2024-07-15 19:32:16.772573] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.096 [2024-07-15 19:32:16.772633] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.096 [2024-07-15 19:32:16.772647] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.096 [2024-07-15 19:32:16.772658] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.096 [2024-07-15 19:32:16.772666] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.096 [2024-07-15 19:32:16.772811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.096 [2024-07-15 19:32:16.773129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.096 [2024-07-15 19:32:16.773665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.096 [2024-07-15 19:32:16.773723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 [2024-07-15 19:32:17.578172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 Malloc0 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
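With the namespace plumbed, the identify test launches nvmf_tgt inside it and stages a Malloc-backed subsystem over RPC. A minimal sketch of that bring-up follows, assuming the same SPDK build tree the log uses; the readiness loop stands in for the test's waitforlisten helper and uses spdk_get_version purely as a probe, which is an assumption rather than what the script does.

  spdk=/home/vagrant/spdk_repo/spdk                 # build tree used by this run
  rpc=$spdk/scripts/rpc.py                          # talks to /var/tmp/spdk.sock by default

  # Target app inside the namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF (4 cores).
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  tgt_pid=$!                                        # keep the pid for later cleanup

  # Wait until the RPC server answers (stand-in for waitforlisten).
  until $rpc spdk_get_version >/dev/null 2>&1; do sleep 0.2; done

  $rpc nvmf_create_transport -t tcp -o -u 8192      # transport options as passed by the test
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

The nvmf_subsystem_add_ns call with --nguid/--eui64 and the listener registrations for cnode1 and the discovery subsystem, which complete this setup, follow immediately below in the log.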
00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 [2024-07-15 19:32:17.672321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 [ 00:16:28.029 { 00:16:28.029 "allow_any_host": true, 00:16:28.029 "hosts": [], 00:16:28.029 "listen_addresses": [ 00:16:28.029 { 00:16:28.029 "adrfam": "IPv4", 00:16:28.029 "traddr": "10.0.0.2", 00:16:28.029 "trsvcid": "4420", 00:16:28.029 "trtype": "TCP" 00:16:28.029 } 00:16:28.029 ], 00:16:28.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:28.029 "subtype": "Discovery" 00:16:28.029 }, 00:16:28.029 { 00:16:28.029 "allow_any_host": true, 00:16:28.029 "hosts": [], 00:16:28.029 "listen_addresses": [ 00:16:28.029 { 00:16:28.029 "adrfam": "IPv4", 00:16:28.029 "traddr": "10.0.0.2", 00:16:28.029 "trsvcid": "4420", 00:16:28.029 "trtype": "TCP" 00:16:28.029 } 00:16:28.029 ], 00:16:28.029 "max_cntlid": 65519, 00:16:28.029 "max_namespaces": 32, 00:16:28.029 "min_cntlid": 1, 00:16:28.029 "model_number": "SPDK bdev Controller", 00:16:28.029 "namespaces": [ 00:16:28.029 { 00:16:28.029 "bdev_name": "Malloc0", 00:16:28.029 "eui64": "ABCDEF0123456789", 00:16:28.029 "name": "Malloc0", 00:16:28.029 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:28.029 "nsid": 1, 00:16:28.029 "uuid": "aa95ad36-b7fe-45b6-81f9-c3df619aeeb3" 00:16:28.029 } 00:16:28.029 ], 00:16:28.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.029 "serial_number": "SPDK00000000000001", 00:16:28.029 "subtype": "NVMe" 00:16:28.029 } 00:16:28.029 ] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 19:32:17 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:16:28.029 [2024-07-15 19:32:17.726212] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:28.029 [2024-07-15 19:32:17.726292] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86779 ] 00:16:28.291 [2024-07-15 19:32:17.866671] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:28.291 [2024-07-15 19:32:17.866745] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:28.291 [2024-07-15 19:32:17.866753] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:28.291 [2024-07-15 19:32:17.866767] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:28.291 [2024-07-15 19:32:17.866774] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:28.291 [2024-07-15 19:32:17.866921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:28.291 [2024-07-15 19:32:17.866979] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f27c00 0 00:16:28.291 [2024-07-15 19:32:17.879380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:28.291 [2024-07-15 19:32:17.879408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:28.291 [2024-07-15 19:32:17.879415] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:28.291 [2024-07-15 19:32:17.879419] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:28.291 [2024-07-15 19:32:17.879468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.291 [2024-07-15 19:32:17.879475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.291 [2024-07-15 19:32:17.879480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.291 [2024-07-15 19:32:17.879495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:28.291 [2024-07-15 19:32:17.879529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.291 [2024-07-15 19:32:17.887374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.291 [2024-07-15 19:32:17.887396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.291 [2024-07-15 19:32:17.887402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.291 [2024-07-15 19:32:17.887407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.291 [2024-07-15 19:32:17.887422] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:28.291 [2024-07-15 19:32:17.887432] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:28.291 [2024-07-15 19:32:17.887439] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:28.291 [2024-07-15 19:32:17.887457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:16:28.291 [2024-07-15 19:32:17.887463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.291 [2024-07-15 19:32:17.887467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.291 [2024-07-15 19:32:17.887477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.291 [2024-07-15 19:32:17.887508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.291 [2024-07-15 19:32:17.887583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.291 [2024-07-15 19:32:17.887590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.291 [2024-07-15 19:32:17.887595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.887605] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:28.292 [2024-07-15 19:32:17.887613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:28.292 [2024-07-15 19:32:17.887622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.292 [2024-07-15 19:32:17.887638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.292 [2024-07-15 19:32:17.887660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.292 [2024-07-15 19:32:17.887716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.292 [2024-07-15 19:32:17.887723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.292 [2024-07-15 19:32:17.887727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.887738] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:28.292 [2024-07-15 19:32:17.887747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:28.292 [2024-07-15 19:32:17.887755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.292 [2024-07-15 19:32:17.887771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.292 [2024-07-15 19:32:17.887791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.292 [2024-07-15 19:32:17.887843] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.292 [2024-07-15 19:32:17.887851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.292 [2024-07-15 19:32:17.887855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.887865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:28.292 [2024-07-15 19:32:17.887876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.292 [2024-07-15 19:32:17.887892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.292 [2024-07-15 19:32:17.887912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.292 [2024-07-15 19:32:17.887968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.292 [2024-07-15 19:32:17.887975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.292 [2024-07-15 19:32:17.887979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.887983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.887989] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:28.292 [2024-07-15 19:32:17.887994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:28.292 [2024-07-15 19:32:17.888002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:28.292 [2024-07-15 19:32:17.888109] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:28.292 [2024-07-15 19:32:17.888115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:28.292 [2024-07-15 19:32:17.888124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.292 [2024-07-15 19:32:17.888145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.292 [2024-07-15 19:32:17.888166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.292 [2024-07-15 19:32:17.888219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.292 [2024-07-15 19:32:17.888232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.292 
[2024-07-15 19:32:17.888237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.888247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:28.292 [2024-07-15 19:32:17.888259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.292 [2024-07-15 19:32:17.888276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.292 [2024-07-15 19:32:17.888297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.292 [2024-07-15 19:32:17.888353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.292 [2024-07-15 19:32:17.888378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.292 [2024-07-15 19:32:17.888383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.888393] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:28.292 [2024-07-15 19:32:17.888399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:28.292 [2024-07-15 19:32:17.888408] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:28.292 [2024-07-15 19:32:17.888419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:28.292 [2024-07-15 19:32:17.888430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.292 [2024-07-15 19:32:17.888444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.292 [2024-07-15 19:32:17.888468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.292 [2024-07-15 19:32:17.888559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.292 [2024-07-15 19:32:17.888566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.292 [2024-07-15 19:32:17.888570] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888574] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f27c00): datao=0, datal=4096, cccid=0 00:16:28.292 [2024-07-15 19:32:17.888580] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f6a9c0) on tqpair(0x1f27c00): expected_datao=0, payload_size=4096 00:16:28.292 
[2024-07-15 19:32:17.888585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888594] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888599] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.292 [2024-07-15 19:32:17.888615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.292 [2024-07-15 19:32:17.888619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.292 [2024-07-15 19:32:17.888633] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:28.292 [2024-07-15 19:32:17.888638] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:28.292 [2024-07-15 19:32:17.888643] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:28.292 [2024-07-15 19:32:17.888649] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:28.292 [2024-07-15 19:32:17.888654] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:28.292 [2024-07-15 19:32:17.888659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:28.292 [2024-07-15 19:32:17.888669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:28.292 [2024-07-15 19:32:17.888677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.292 [2024-07-15 19:32:17.888682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.888694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.293 [2024-07-15 19:32:17.888715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.293 [2024-07-15 19:32:17.888781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.293 [2024-07-15 19:32:17.888788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.293 [2024-07-15 19:32:17.888792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.293 [2024-07-15 19:32:17.888805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.888821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.293 [2024-07-15 19:32:17.888828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.888842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.293 [2024-07-15 19:32:17.888849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.888863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.293 [2024-07-15 19:32:17.888870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.888884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.293 [2024-07-15 19:32:17.888889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:28.293 [2024-07-15 19:32:17.888904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:28.293 [2024-07-15 19:32:17.888912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.888916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.888924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.293 [2024-07-15 19:32:17.888947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6a9c0, cid 0, qid 0 00:16:28.293 [2024-07-15 19:32:17.888954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ab40, cid 1, qid 0 00:16:28.293 [2024-07-15 19:32:17.888960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6acc0, cid 2, qid 0 00:16:28.293 [2024-07-15 19:32:17.888965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.293 [2024-07-15 19:32:17.888970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6afc0, cid 4, qid 0 00:16:28.293 [2024-07-15 19:32:17.889065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.293 [2024-07-15 19:32:17.889072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.293 [2024-07-15 19:32:17.889076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6afc0) on 
tqpair=0x1f27c00 00:16:28.293 [2024-07-15 19:32:17.889086] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:28.293 [2024-07-15 19:32:17.889096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:28.293 [2024-07-15 19:32:17.889109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.889122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.293 [2024-07-15 19:32:17.889142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6afc0, cid 4, qid 0 00:16:28.293 [2024-07-15 19:32:17.889208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.293 [2024-07-15 19:32:17.889215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.293 [2024-07-15 19:32:17.889219] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889223] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f27c00): datao=0, datal=4096, cccid=4 00:16:28.293 [2024-07-15 19:32:17.889229] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f6afc0) on tqpair(0x1f27c00): expected_datao=0, payload_size=4096 00:16:28.293 [2024-07-15 19:32:17.889234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889241] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889245] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.293 [2024-07-15 19:32:17.889261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.293 [2024-07-15 19:32:17.889265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6afc0) on tqpair=0x1f27c00 00:16:28.293 [2024-07-15 19:32:17.889290] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:28.293 [2024-07-15 19:32:17.889320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.889334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.293 [2024-07-15 19:32:17.889342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f27c00) 00:16:28.293 [2024-07-15 19:32:17.889370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.293 [2024-07-15 19:32:17.889400] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6afc0, cid 4, qid 0 00:16:28.293 [2024-07-15 19:32:17.889409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6b140, cid 5, qid 0 00:16:28.293 [2024-07-15 19:32:17.889518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.293 [2024-07-15 19:32:17.889525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.293 [2024-07-15 19:32:17.889529] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889534] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f27c00): datao=0, datal=1024, cccid=4 00:16:28.293 [2024-07-15 19:32:17.889539] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f6afc0) on tqpair(0x1f27c00): expected_datao=0, payload_size=1024 00:16:28.293 [2024-07-15 19:32:17.889544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889551] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889555] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.293 [2024-07-15 19:32:17.889567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.293 [2024-07-15 19:32:17.889571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.889576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6b140) on tqpair=0x1f27c00 00:16:28.293 [2024-07-15 19:32:17.930425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.293 [2024-07-15 19:32:17.930451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.293 [2024-07-15 19:32:17.930457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.293 [2024-07-15 19:32:17.930462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6afc0) on tqpair=0x1f27c00 00:16:28.293 [2024-07-15 19:32:17.930480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f27c00) 00:16:28.294 [2024-07-15 19:32:17.930496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.294 [2024-07-15 19:32:17.930531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6afc0, cid 4, qid 0 00:16:28.294 [2024-07-15 19:32:17.930624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.294 [2024-07-15 19:32:17.930631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.294 [2024-07-15 19:32:17.930635] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930639] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f27c00): datao=0, datal=3072, cccid=4 00:16:28.294 [2024-07-15 19:32:17.930650] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f6afc0) on tqpair(0x1f27c00): expected_datao=0, payload_size=3072 00:16:28.294 [2024-07-15 19:32:17.930655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930663] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:16:28.294 [2024-07-15 19:32:17.930668] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.294 [2024-07-15 19:32:17.930683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.294 [2024-07-15 19:32:17.930687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6afc0) on tqpair=0x1f27c00 00:16:28.294 [2024-07-15 19:32:17.930703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f27c00) 00:16:28.294 [2024-07-15 19:32:17.930716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.294 [2024-07-15 19:32:17.930743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6afc0, cid 4, qid 0 00:16:28.294 [2024-07-15 19:32:17.930819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.294 [2024-07-15 19:32:17.930826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.294 [2024-07-15 19:32:17.930830] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930834] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f27c00): datao=0, datal=8, cccid=4 00:16:28.294 [2024-07-15 19:32:17.930840] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f6afc0) on tqpair(0x1f27c00): expected_datao=0, payload_size=8 00:16:28.294 [2024-07-15 19:32:17.930844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930852] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.930856] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.294 ===================================================== 00:16:28.294 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:28.294 ===================================================== 00:16:28.294 Controller Capabilities/Features 00:16:28.294 ================================ 00:16:28.294 Vendor ID: 0000 00:16:28.294 Subsystem Vendor ID: 0000 00:16:28.294 Serial Number: .................... 00:16:28.294 Model Number: ........................................ 
00:16:28.294 Firmware Version: 24.09 00:16:28.294 Recommended Arb Burst: 0 00:16:28.294 IEEE OUI Identifier: 00 00 00 00:16:28.294 Multi-path I/O 00:16:28.294 May have multiple subsystem ports: No 00:16:28.294 May have multiple controllers: No 00:16:28.294 Associated with SR-IOV VF: No 00:16:28.294 Max Data Transfer Size: 131072 00:16:28.294 Max Number of Namespaces: 0 00:16:28.294 Max Number of I/O Queues: 1024 00:16:28.294 NVMe Specification Version (VS): 1.3 00:16:28.294 NVMe Specification Version (Identify): 1.3 00:16:28.294 Maximum Queue Entries: 128 00:16:28.294 Contiguous Queues Required: Yes 00:16:28.294 Arbitration Mechanisms Supported 00:16:28.294 Weighted Round Robin: Not Supported 00:16:28.294 Vendor Specific: Not Supported 00:16:28.294 Reset Timeout: 15000 ms 00:16:28.294 Doorbell Stride: 4 bytes 00:16:28.294 NVM Subsystem Reset: Not Supported 00:16:28.294 Command Sets Supported 00:16:28.294 NVM Command Set: Supported 00:16:28.294 Boot Partition: Not Supported 00:16:28.294 Memory Page Size Minimum: 4096 bytes 00:16:28.294 Memory Page Size Maximum: 4096 bytes 00:16:28.294 Persistent Memory Region: Not Supported 00:16:28.294 Optional Asynchronous Events Supported 00:16:28.294 Namespace Attribute Notices: Not Supported 00:16:28.294 Firmware Activation Notices: Not Supported 00:16:28.294 ANA Change Notices: Not Supported 00:16:28.294 PLE Aggregate Log Change Notices: Not Supported 00:16:28.294 LBA Status Info Alert Notices: Not Supported 00:16:28.294 EGE Aggregate Log Change Notices: Not Supported 00:16:28.294 Normal NVM Subsystem Shutdown event: Not Supported 00:16:28.294 Zone Descriptor Change Notices: Not Supported 00:16:28.294 Discovery Log Change Notices: Supported 00:16:28.294 Controller Attributes 00:16:28.294 128-bit Host Identifier: Not Supported 00:16:28.294 Non-Operational Permissive Mode: Not Supported 00:16:28.294 NVM Sets: Not Supported 00:16:28.294 Read Recovery Levels: Not Supported 00:16:28.294 Endurance Groups: Not Supported 00:16:28.294 Predictable Latency Mode: Not Supported 00:16:28.294 Traffic Based Keep ALive: Not Supported 00:16:28.294 Namespace Granularity: Not Supported 00:16:28.294 SQ Associations: Not Supported 00:16:28.294 UUID List: Not Supported 00:16:28.294 Multi-Domain Subsystem: Not Supported 00:16:28.294 Fixed Capacity Management: Not Supported 00:16:28.294 Variable Capacity Management: Not Supported 00:16:28.294 Delete Endurance Group: Not Supported 00:16:28.294 Delete NVM Set: Not Supported 00:16:28.294 Extended LBA Formats Supported: Not Supported 00:16:28.294 Flexible Data Placement Supported: Not Supported 00:16:28.294 00:16:28.294 Controller Memory Buffer Support 00:16:28.294 ================================ 00:16:28.294 Supported: No 00:16:28.294 00:16:28.294 Persistent Memory Region Support 00:16:28.294 ================================ 00:16:28.294 Supported: No 00:16:28.294 00:16:28.294 Admin Command Set Attributes 00:16:28.294 ============================ 00:16:28.294 Security Send/Receive: Not Supported 00:16:28.294 Format NVM: Not Supported 00:16:28.294 Firmware Activate/Download: Not Supported 00:16:28.294 Namespace Management: Not Supported 00:16:28.294 Device Self-Test: Not Supported 00:16:28.294 Directives: Not Supported 00:16:28.294 NVMe-MI: Not Supported 00:16:28.294 Virtualization Management: Not Supported 00:16:28.294 Doorbell Buffer Config: Not Supported 00:16:28.294 Get LBA Status Capability: Not Supported 00:16:28.294 Command & Feature Lockdown Capability: Not Supported 00:16:28.294 Abort Command Limit: 1 00:16:28.294 Async 
Event Request Limit: 4 00:16:28.294 Number of Firmware Slots: N/A 00:16:28.294 Firmware Slot 1 Read-Only: N/A 00:16:28.294 Firm[2024-07-15 19:32:17.975392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.294 [2024-07-15 19:32:17.975426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.294 [2024-07-15 19:32:17.975432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.294 [2024-07-15 19:32:17.975438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6afc0) on tqpair=0x1f27c00 00:16:28.294 ware Activation Without Reset: N/A 00:16:28.294 Multiple Update Detection Support: N/A 00:16:28.294 Firmware Update Granularity: No Information Provided 00:16:28.294 Per-Namespace SMART Log: No 00:16:28.294 Asymmetric Namespace Access Log Page: Not Supported 00:16:28.295 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:28.295 Command Effects Log Page: Not Supported 00:16:28.295 Get Log Page Extended Data: Supported 00:16:28.295 Telemetry Log Pages: Not Supported 00:16:28.295 Persistent Event Log Pages: Not Supported 00:16:28.295 Supported Log Pages Log Page: May Support 00:16:28.295 Commands Supported & Effects Log Page: Not Supported 00:16:28.295 Feature Identifiers & Effects Log Page:May Support 00:16:28.295 NVMe-MI Commands & Effects Log Page: May Support 00:16:28.295 Data Area 4 for Telemetry Log: Not Supported 00:16:28.295 Error Log Page Entries Supported: 128 00:16:28.295 Keep Alive: Not Supported 00:16:28.295 00:16:28.295 NVM Command Set Attributes 00:16:28.295 ========================== 00:16:28.295 Submission Queue Entry Size 00:16:28.295 Max: 1 00:16:28.295 Min: 1 00:16:28.295 Completion Queue Entry Size 00:16:28.295 Max: 1 00:16:28.295 Min: 1 00:16:28.295 Number of Namespaces: 0 00:16:28.295 Compare Command: Not Supported 00:16:28.295 Write Uncorrectable Command: Not Supported 00:16:28.295 Dataset Management Command: Not Supported 00:16:28.295 Write Zeroes Command: Not Supported 00:16:28.295 Set Features Save Field: Not Supported 00:16:28.295 Reservations: Not Supported 00:16:28.295 Timestamp: Not Supported 00:16:28.295 Copy: Not Supported 00:16:28.295 Volatile Write Cache: Not Present 00:16:28.295 Atomic Write Unit (Normal): 1 00:16:28.295 Atomic Write Unit (PFail): 1 00:16:28.295 Atomic Compare & Write Unit: 1 00:16:28.295 Fused Compare & Write: Supported 00:16:28.295 Scatter-Gather List 00:16:28.295 SGL Command Set: Supported 00:16:28.295 SGL Keyed: Supported 00:16:28.295 SGL Bit Bucket Descriptor: Not Supported 00:16:28.295 SGL Metadata Pointer: Not Supported 00:16:28.295 Oversized SGL: Not Supported 00:16:28.295 SGL Metadata Address: Not Supported 00:16:28.295 SGL Offset: Supported 00:16:28.295 Transport SGL Data Block: Not Supported 00:16:28.295 Replay Protected Memory Block: Not Supported 00:16:28.295 00:16:28.295 Firmware Slot Information 00:16:28.295 ========================= 00:16:28.295 Active slot: 0 00:16:28.295 00:16:28.295 00:16:28.295 Error Log 00:16:28.295 ========= 00:16:28.295 00:16:28.295 Active Namespaces 00:16:28.295 ================= 00:16:28.295 Discovery Log Page 00:16:28.295 ================== 00:16:28.295 Generation Counter: 2 00:16:28.295 Number of Records: 2 00:16:28.295 Record Format: 0 00:16:28.295 00:16:28.295 Discovery Log Entry 0 00:16:28.295 ---------------------- 00:16:28.295 Transport Type: 3 (TCP) 00:16:28.295 Address Family: 1 (IPv4) 00:16:28.295 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:28.295 Entry Flags: 00:16:28.295 Duplicate Returned 
Information: 1 00:16:28.295 Explicit Persistent Connection Support for Discovery: 1 00:16:28.295 Transport Requirements: 00:16:28.295 Secure Channel: Not Required 00:16:28.295 Port ID: 0 (0x0000) 00:16:28.295 Controller ID: 65535 (0xffff) 00:16:28.295 Admin Max SQ Size: 128 00:16:28.295 Transport Service Identifier: 4420 00:16:28.295 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:28.295 Transport Address: 10.0.0.2 00:16:28.295 Discovery Log Entry 1 00:16:28.295 ---------------------- 00:16:28.295 Transport Type: 3 (TCP) 00:16:28.295 Address Family: 1 (IPv4) 00:16:28.295 Subsystem Type: 2 (NVM Subsystem) 00:16:28.295 Entry Flags: 00:16:28.295 Duplicate Returned Information: 0 00:16:28.295 Explicit Persistent Connection Support for Discovery: 0 00:16:28.295 Transport Requirements: 00:16:28.295 Secure Channel: Not Required 00:16:28.295 Port ID: 0 (0x0000) 00:16:28.295 Controller ID: 65535 (0xffff) 00:16:28.295 Admin Max SQ Size: 128 00:16:28.295 Transport Service Identifier: 4420 00:16:28.295 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:28.295 Transport Address: 10.0.0.2 [2024-07-15 19:32:17.975558] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:28.295 [2024-07-15 19:32:17.975574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6a9c0) on tqpair=0x1f27c00 00:16:28.295 [2024-07-15 19:32:17.975582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.295 [2024-07-15 19:32:17.975589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ab40) on tqpair=0x1f27c00 00:16:28.295 [2024-07-15 19:32:17.975594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.295 [2024-07-15 19:32:17.975600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6acc0) on tqpair=0x1f27c00 00:16:28.295 [2024-07-15 19:32:17.975605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.295 [2024-07-15 19:32:17.975619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.295 [2024-07-15 19:32:17.975624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.295 [2024-07-15 19:32:17.975636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.295 [2024-07-15 19:32:17.975641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.295 [2024-07-15 19:32:17.975645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.295 [2024-07-15 19:32:17.975656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.295 [2024-07-15 19:32:17.975684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.295 [2024-07-15 19:32:17.975757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.295 [2024-07-15 19:32:17.975765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.295 [2024-07-15 19:32:17.975769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.295 [2024-07-15 19:32:17.975773] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.295 [2024-07-15 19:32:17.975782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.295 [2024-07-15 19:32:17.975787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.295 [2024-07-15 19:32:17.975791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.295 [2024-07-15 19:32:17.975798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.295 [2024-07-15 19:32:17.975824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.295 [2024-07-15 19:32:17.975900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.295 [2024-07-15 19:32:17.975907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.295 [2024-07-15 19:32:17.975911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.295 [2024-07-15 19:32:17.975915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.975921] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:28.296 [2024-07-15 19:32:17.975926] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:28.296 [2024-07-15 19:32:17.975937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.975942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.975946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.975954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.975974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 
19:32:17.976159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on 
tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.296 [2024-07-15 19:32:17.976887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.296 [2024-07-15 19:32:17.976902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976907] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.296 [2024-07-15 19:32:17.976911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.296 [2024-07-15 19:32:17.976919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.296 [2024-07-15 19:32:17.976938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.296 [2024-07-15 19:32:17.976991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.296 [2024-07-15 19:32:17.976998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 [2024-07-15 19:32:17.977034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 [2024-07-15 19:32:17.977149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 
[2024-07-15 19:32:17.977264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 [2024-07-15 19:32:17.977398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 [2024-07-15 19:32:17.977523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 [2024-07-15 19:32:17.977641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977660] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.297 [2024-07-15 19:32:17.977742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.297 [2024-07-15 19:32:17.977759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.297 [2024-07-15 19:32:17.977778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.297 [2024-07-15 19:32:17.977831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.297 [2024-07-15 19:32:17.977838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.297 [2024-07-15 19:32:17.977842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.297 [2024-07-15 19:32:17.977847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.977858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.977863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.977867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.977874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.977893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.977955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.977962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.977966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.977970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.977982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.977987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.977991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.977998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 
[2024-07-15 19:32:17.978081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
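The repeated "pdu type = 5" / "FABRIC PROPERTY GET qid:0 cid:3" cycles running through this stretch of the trace appear to be the host re-reading a controller status property over the admin queue while it waits for the discovery controller's shutdown to finish (the wait resolves a little further down with "shutdown complete in 7 milliseconds"). A minimal sketch of that polling pattern, assuming a hypothetical read_csts() helper in place of SPDK's internal fabrics property-get path:

    /*
     * Illustrative sketch only, not SPDK source: poll CSTS until the shutdown
     * status field reports completion. read_csts() is a hypothetical stand-in
     * for a Fabrics Property Get of the CSTS register (offset 0x1C).
     */
    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    union spdk_nvme_csts_register read_csts(void);   /* hypothetical transport read */

    static bool shutdown_poll_once(void)
    {
            union spdk_nvme_csts_register csts = read_csts();

            /* SHST == SPDK_NVME_SHST_COMPLETE (10b): shutdown processing is done. */
            return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
    }

Each poll that comes back before SHST reaches the complete value produces another capsule/response round trip like the entries logged above and below this point.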
00:16:28.298 [2024-07-15 19:32:17.978490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978855] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.298 [2024-07-15 19:32:17.978891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.298 [2024-07-15 19:32:17.978945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.298 [2024-07-15 19:32:17.978952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.298 [2024-07-15 19:32:17.978957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.298 [2024-07-15 19:32:17.978972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.298 [2024-07-15 19:32:17.978981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.298 [2024-07-15 19:32:17.978989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.299 [2024-07-15 19:32:17.979008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.299 [2024-07-15 19:32:17.979062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.299 [2024-07-15 19:32:17.979074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.299 [2024-07-15 19:32:17.979079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.299 [2024-07-15 19:32:17.979095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.299 [2024-07-15 19:32:17.979112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.299 [2024-07-15 19:32:17.979133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.299 [2024-07-15 19:32:17.979189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.299 [2024-07-15 19:32:17.979201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.299 [2024-07-15 19:32:17.979206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.299 [2024-07-15 19:32:17.979222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979231] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.299 [2024-07-15 19:32:17.979239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.299 [2024-07-15 19:32:17.979259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.299 [2024-07-15 19:32:17.979313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.299 [2024-07-15 19:32:17.979320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.299 [2024-07-15 19:32:17.979324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.299 [2024-07-15 19:32:17.979339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.979348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f27c00) 00:16:28.299 [2024-07-15 19:32:17.983367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.299 [2024-07-15 19:32:17.983407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f6ae40, cid 3, qid 0 00:16:28.299 [2024-07-15 19:32:17.983478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.299 [2024-07-15 19:32:17.983486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.299 [2024-07-15 19:32:17.983490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.299 [2024-07-15 19:32:17.983495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f6ae40) on tqpair=0x1f27c00 00:16:28.299 [2024-07-15 19:32:17.983504] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:16:28.299 00:16:28.299 19:32:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:28.299 [2024-07-15 19:32:18.021753] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
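The spdk_nvme_identify invocation above drives everything that follows: it parses the transport ID string given with -r, connects to the TCP target, walks the controller-enable and identify states recorded in the debug lines below, and then prints the controller report. A rough equivalent of that connect-and-identify flow using the public SPDK host API is sketched here as a hedged illustration; the program name and the printed fields are assumptions, not the identify tool's actual source:

    /* Sketch: connect to the target exercised by this test and dump two identify
     * fields. Error handling is minimal; option values are assumptions. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            /* Environment setup; this tree may prefer newer variants of these calls. */
            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";          /* hypothetical app name */
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            /* Same transport ID string the test passes via -r. */
            if (spdk_nvme_transport_id_parse(&trid,
                    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* Connecting runs the enable/identify state machine seen in the log. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
            printf("Model Number:  %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);

            spdk_nvme_detach(ctrlr);
            return 0;
    }

The debug entries that follow are the transport-level view of exactly this sequence: socket connect, ICREQ/ICRESP exchange, the register reads and writes of the enable handshake, and the IDENTIFY data transfers.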
00:16:28.299 [2024-07-15 19:32:18.021807] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86781 ] 00:16:28.562 [2024-07-15 19:32:18.161613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:28.562 [2024-07-15 19:32:18.161686] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:28.562 [2024-07-15 19:32:18.161694] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:28.562 [2024-07-15 19:32:18.161708] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:28.562 [2024-07-15 19:32:18.161716] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:28.562 [2024-07-15 19:32:18.161862] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:28.562 [2024-07-15 19:32:18.161914] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xaccc00 0 00:16:28.562 [2024-07-15 19:32:18.174379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:28.562 [2024-07-15 19:32:18.174403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:28.562 [2024-07-15 19:32:18.174410] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:28.562 [2024-07-15 19:32:18.174414] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:28.562 [2024-07-15 19:32:18.174462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.174470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.174474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.562 [2024-07-15 19:32:18.174489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:28.562 [2024-07-15 19:32:18.174522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.562 [2024-07-15 19:32:18.182376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.562 [2024-07-15 19:32:18.182398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.562 [2024-07-15 19:32:18.182403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.562 [2024-07-15 19:32:18.182424] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:28.562 [2024-07-15 19:32:18.182433] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:28.562 [2024-07-15 19:32:18.182441] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:28.562 [2024-07-15 19:32:18.182459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182469] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.562 [2024-07-15 19:32:18.182479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.562 [2024-07-15 19:32:18.182509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.562 [2024-07-15 19:32:18.182596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.562 [2024-07-15 19:32:18.182604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.562 [2024-07-15 19:32:18.182608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.562 [2024-07-15 19:32:18.182619] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:28.562 [2024-07-15 19:32:18.182628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:28.562 [2024-07-15 19:32:18.182636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.562 [2024-07-15 19:32:18.182653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.562 [2024-07-15 19:32:18.182675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.562 [2024-07-15 19:32:18.182734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.562 [2024-07-15 19:32:18.182741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.562 [2024-07-15 19:32:18.182745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.562 [2024-07-15 19:32:18.182756] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:28.562 [2024-07-15 19:32:18.182765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:28.562 [2024-07-15 19:32:18.182773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.562 [2024-07-15 19:32:18.182789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.562 [2024-07-15 19:32:18.182809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.562 [2024-07-15 19:32:18.182863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.562 [2024-07-15 19:32:18.182870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.562 [2024-07-15 19:32:18.182874] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.562 [2024-07-15 19:32:18.182884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:28.562 [2024-07-15 19:32:18.182895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.182905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.562 [2024-07-15 19:32:18.182913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.562 [2024-07-15 19:32:18.182932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.562 [2024-07-15 19:32:18.182988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.562 [2024-07-15 19:32:18.182995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.562 [2024-07-15 19:32:18.182999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.562 [2024-07-15 19:32:18.183004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.562 [2024-07-15 19:32:18.183009] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:28.562 [2024-07-15 19:32:18.183015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:28.562 [2024-07-15 19:32:18.183023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:28.562 [2024-07-15 19:32:18.183129] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:28.563 [2024-07-15 19:32:18.183134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:28.563 [2024-07-15 19:32:18.183144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.563 [2024-07-15 19:32:18.183181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.563 [2024-07-15 19:32:18.183238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.563 [2024-07-15 19:32:18.183245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.563 [2024-07-15 19:32:18.183249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.563 [2024-07-15 19:32:18.183259] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:28.563 [2024-07-15 19:32:18.183270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.563 [2024-07-15 19:32:18.183306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.563 [2024-07-15 19:32:18.183378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.563 [2024-07-15 19:32:18.183388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.563 [2024-07-15 19:32:18.183392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.563 [2024-07-15 19:32:18.183402] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:28.563 [2024-07-15 19:32:18.183408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.183417] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:28.563 [2024-07-15 19:32:18.183428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.183440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.563 [2024-07-15 19:32:18.183477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.563 [2024-07-15 19:32:18.183575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.563 [2024-07-15 19:32:18.183583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.563 [2024-07-15 19:32:18.183587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=4096, cccid=0 00:16:28.563 [2024-07-15 19:32:18.183597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb0f9c0) on tqpair(0xaccc00): expected_datao=0, payload_size=4096 00:16:28.563 [2024-07-15 19:32:18.183602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183611] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183616] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 
19:32:18.183625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.563 [2024-07-15 19:32:18.183632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.563 [2024-07-15 19:32:18.183636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.563 [2024-07-15 19:32:18.183649] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:28.563 [2024-07-15 19:32:18.183654] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:28.563 [2024-07-15 19:32:18.183659] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:28.563 [2024-07-15 19:32:18.183664] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:28.563 [2024-07-15 19:32:18.183669] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:28.563 [2024-07-15 19:32:18.183675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.183684] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.183693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.563 [2024-07-15 19:32:18.183732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.563 [2024-07-15 19:32:18.183795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.563 [2024-07-15 19:32:18.183802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.563 [2024-07-15 19:32:18.183806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.563 [2024-07-15 19:32:18.183819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.563 [2024-07-15 19:32:18.183841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xaccc00) 00:16:28.563 
[2024-07-15 19:32:18.183856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.563 [2024-07-15 19:32:18.183863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.563 [2024-07-15 19:32:18.183884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.563 [2024-07-15 19:32:18.183904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.183917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.183926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.183930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.183938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.563 [2024-07-15 19:32:18.183961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0f9c0, cid 0, qid 0 00:16:28.563 [2024-07-15 19:32:18.183969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fb40, cid 1, qid 0 00:16:28.563 [2024-07-15 19:32:18.183974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fcc0, cid 2, qid 0 00:16:28.563 [2024-07-15 19:32:18.183979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.563 [2024-07-15 19:32:18.183984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.563 [2024-07-15 19:32:18.184079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.563 [2024-07-15 19:32:18.184086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.563 [2024-07-15 19:32:18.184090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.184095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.563 [2024-07-15 19:32:18.184100] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:28.563 [2024-07-15 19:32:18.184110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.184120] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.184127] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.184134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.184139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.184143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.184151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.563 [2024-07-15 19:32:18.184171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.563 [2024-07-15 19:32:18.184236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.563 [2024-07-15 19:32:18.184243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.563 [2024-07-15 19:32:18.184247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.184252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.563 [2024-07-15 19:32:18.184319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.184339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:28.563 [2024-07-15 19:32:18.184349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.563 [2024-07-15 19:32:18.184354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.563 [2024-07-15 19:32:18.184375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.563 [2024-07-15 19:32:18.184400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.563 [2024-07-15 19:32:18.184475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.564 [2024-07-15 19:32:18.184488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.564 [2024-07-15 19:32:18.184493] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=4096, cccid=4 00:16:28.564 [2024-07-15 19:32:18.184503] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb0ffc0) on tqpair(0xaccc00): expected_datao=0, payload_size=4096 00:16:28.564 [2024-07-15 19:32:18.184508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184516] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.184537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.184541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.184562] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:28.564 [2024-07-15 19:32:18.184573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.184585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.184594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.184607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.184630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.564 [2024-07-15 19:32:18.184714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.564 [2024-07-15 19:32:18.184722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.564 [2024-07-15 19:32:18.184726] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184730] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=4096, cccid=4 00:16:28.564 [2024-07-15 19:32:18.184735] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb0ffc0) on tqpair(0xaccc00): expected_datao=0, payload_size=4096 00:16:28.564 [2024-07-15 19:32:18.184740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184747] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184752] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.184767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.184771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.184791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.184803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.184813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.184825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.184847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.564 [2024-07-15 19:32:18.184915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.564 [2024-07-15 19:32:18.184922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.564 [2024-07-15 19:32:18.184926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184930] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=4096, cccid=4 00:16:28.564 [2024-07-15 19:32:18.184935] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb0ffc0) on tqpair(0xaccc00): expected_datao=0, payload_size=4096 00:16:28.564 [2024-07-15 19:32:18.184940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184947] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184952] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.184967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.184971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.184975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.184984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.184993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.185004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.185011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.185016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.185022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.185028] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:28.564 [2024-07-15 19:32:18.185033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:28.564 [2024-07-15 19:32:18.185039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:28.564 [2024-07-15 19:32:18.185056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.564 [2024-07-15 19:32:18.185119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.564 [2024-07-15 19:32:18.185127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10140, cid 5, qid 0 00:16:28.564 [2024-07-15 19:32:18.185201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.185209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.185213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.185225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.185231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.185235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10140) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.185250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10140, cid 5, qid 0 00:16:28.564 [2024-07-15 19:32:18.185343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.185351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.185355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10140) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.185390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10140, cid 5, qid 0 00:16:28.564 [2024-07-15 19:32:18.185492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.185500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.185504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10140) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.185519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10140, cid 5, qid 0 00:16:28.564 [2024-07-15 19:32:18.185609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.564 [2024-07-15 19:32:18.185616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.564 [2024-07-15 19:32:18.185620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10140) on tqpair=0xaccc00 00:16:28.564 [2024-07-15 19:32:18.185644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.564 [2024-07-15 19:32:18.185690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xaccc00) 00:16:28.564 [2024-07-15 19:32:18.185697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.564 [2024-07-15 19:32:18.185708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xaccc00) 00:16:28.565 [2024-07-15 19:32:18.185720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.565 [2024-07-15 19:32:18.185742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10140, cid 5, qid 0 00:16:28.565 [2024-07-15 19:32:18.185750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0ffc0, cid 4, qid 0 00:16:28.565 [2024-07-15 19:32:18.185755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb102c0, cid 6, qid 0 00:16:28.565 [2024-07-15 
19:32:18.185760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10440, cid 7, qid 0 00:16:28.565 [2024-07-15 19:32:18.185902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.565 [2024-07-15 19:32:18.185909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.565 [2024-07-15 19:32:18.185913] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=8192, cccid=5 00:16:28.565 [2024-07-15 19:32:18.185923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb10140) on tqpair(0xaccc00): expected_datao=0, payload_size=8192 00:16:28.565 [2024-07-15 19:32:18.185927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185950] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.565 [2024-07-15 19:32:18.185963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.565 [2024-07-15 19:32:18.185967] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185971] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=512, cccid=4 00:16:28.565 [2024-07-15 19:32:18.185976] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb0ffc0) on tqpair(0xaccc00): expected_datao=0, payload_size=512 00:16:28.565 [2024-07-15 19:32:18.185980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185987] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185991] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.185997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.565 [2024-07-15 19:32:18.186003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.565 [2024-07-15 19:32:18.186007] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186011] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=512, cccid=6 00:16:28.565 [2024-07-15 19:32:18.186015] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb102c0) on tqpair(0xaccc00): expected_datao=0, payload_size=512 00:16:28.565 [2024-07-15 19:32:18.186020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186027] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:28.565 [2024-07-15 19:32:18.186043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:28.565 [2024-07-15 19:32:18.186047] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186051] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaccc00): datao=0, datal=4096, cccid=7 00:16:28.565 [2024-07-15 19:32:18.186055] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb10440) on tqpair(0xaccc00): expected_datao=0, payload_size=4096 00:16:28.565 [2024-07-15 19:32:18.186060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186067] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186071] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.565 [2024-07-15 19:32:18.186086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.565 [2024-07-15 19:32:18.186090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10140) on tqpair=0xaccc00 00:16:28.565 [2024-07-15 19:32:18.186112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.565 [2024-07-15 19:32:18.186120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.565 [2024-07-15 19:32:18.186124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.565 [2024-07-15 19:32:18.186128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0ffc0) on tqpair=0xaccc00 00:16:28.565 [2024-07-15 19:32:18.186140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.565 [2024-07-15 19:32:18.186147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.565 ===================================================== 00:16:28.565 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:28.565 ===================================================== 00:16:28.565 Controller Capabilities/Features 00:16:28.565 ================================ 00:16:28.565 Vendor ID: 8086 00:16:28.565 Subsystem Vendor ID: 8086 00:16:28.565 Serial Number: SPDK00000000000001 00:16:28.565 Model Number: SPDK bdev Controller 00:16:28.565 Firmware Version: 24.09 00:16:28.565 Recommended Arb Burst: 6 00:16:28.565 IEEE OUI Identifier: e4 d2 5c 00:16:28.565 Multi-path I/O 00:16:28.565 May have multiple subsystem ports: Yes 00:16:28.565 May have multiple controllers: Yes 00:16:28.565 Associated with SR-IOV VF: No 00:16:28.565 Max Data Transfer Size: 131072 00:16:28.565 Max Number of Namespaces: 32 00:16:28.565 Max Number of I/O Queues: 127 00:16:28.565 NVMe Specification Version (VS): 1.3 00:16:28.565 NVMe Specification Version (Identify): 1.3 00:16:28.565 Maximum Queue Entries: 128 00:16:28.565 Contiguous Queues Required: Yes 00:16:28.565 Arbitration Mechanisms Supported 00:16:28.565 Weighted Round Robin: Not Supported 00:16:28.565 Vendor Specific: Not Supported 00:16:28.565 Reset Timeout: 15000 ms 00:16:28.565 Doorbell Stride: 4 bytes 00:16:28.565 NVM Subsystem Reset: Not Supported 00:16:28.565 Command Sets Supported 00:16:28.565 NVM Command Set: Supported 00:16:28.565 Boot Partition: Not Supported 00:16:28.565 Memory Page Size Minimum: 4096 bytes 00:16:28.565 Memory Page Size Maximum: 4096 bytes 00:16:28.565 Persistent Memory Region: Not Supported 00:16:28.565 Optional Asynchronous Events Supported 00:16:28.565 Namespace Attribute Notices: Supported 00:16:28.565 Firmware Activation Notices: Not Supported 00:16:28.565 ANA Change Notices: Not Supported 00:16:28.565 PLE Aggregate Log Change Notices: Not Supported 00:16:28.565 LBA Status Info Alert Notices: Not Supported 00:16:28.565 EGE Aggregate Log 
Change Notices: Not Supported 00:16:28.565 Normal NVM Subsystem Shutdown event: Not Supported 00:16:28.565 Zone Descriptor Change Notices: Not Supported 00:16:28.565 Discovery Log Change Notices: Not Supported 00:16:28.565 Controller Attributes 00:16:28.565 128-bit Host Identifier: Supported 00:16:28.565 Non-Operational Permissive Mode: Not Supported 00:16:28.565 NVM Sets: Not Supported 00:16:28.565 Read Recovery Levels: Not Supported 00:16:28.565 Endurance Groups: Not Supported 00:16:28.565 Predictable Latency Mode: Not Supported 00:16:28.565 Traffic Based Keep ALive: Not Supported 00:16:28.565 Namespace Granularity: Not Supported 00:16:28.565 SQ Associations: Not Supported 00:16:28.565 UUID List: Not Supported 00:16:28.565 Multi-Domain Subsystem: Not Supported 00:16:28.565 Fixed Capacity Management: Not Supported 00:16:28.565 Variable Capacity Management: Not Supported 00:16:28.565 Delete Endurance Group: Not Supported 00:16:28.565 Delete NVM Set: Not Supported 00:16:28.565 Extended LBA Formats Supported: Not Supported 00:16:28.565 Flexible Data Placement Supported: Not Supported 00:16:28.565 00:16:28.565 Controller Memory Buffer Support 00:16:28.565 ================================ 00:16:28.565 Supported: No 00:16:28.565 00:16:28.565 Persistent Memory Region Support 00:16:28.565 ================================ 00:16:28.565 Supported: No 00:16:28.565 00:16:28.565 Admin Command Set Attributes 00:16:28.565 ============================ 00:16:28.565 Security Send/Receive: Not Supported 00:16:28.565 Format NVM: Not Supported 00:16:28.565 Firmware Activate/Download: Not Supported 00:16:28.565 Namespace Management: Not Supported 00:16:28.565 Device Self-Test: Not Supported 00:16:28.565 Directives: Not Supported 00:16:28.565 NVMe-MI: Not Supported 00:16:28.565 Virtualization Management: Not Supported 00:16:28.565 Doorbell Buffer Config: Not Supported 00:16:28.565 Get LBA Status Capability: Not Supported 00:16:28.565 Command & Feature Lockdown Capability: Not Supported 00:16:28.565 Abort Command Limit: 4 00:16:28.565 Async Event Request Limit: 4 00:16:28.565 Number of Firmware Slots: N/A 00:16:28.565 Firmware Slot 1 Read-Only: N/A 00:16:28.565 Firmware Activation Without Reset: N/A 00:16:28.565 Multiple Update Detection Support: N/A 00:16:28.565 Firmware Update Granularity: No Information Provided 00:16:28.565 Per-Namespace SMART Log: No 00:16:28.565 Asymmetric Namespace Access Log Page: Not Supported 00:16:28.565 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:28.565 Command Effects Log Page: Supported 00:16:28.565 Get Log Page Extended Data: Supported 00:16:28.565 Telemetry Log Pages: Not Supported 00:16:28.565 Persistent Event Log Pages: Not Supported 00:16:28.565 Supported Log Pages Log Page: May Support 00:16:28.565 Commands Supported & Effects Log Page: Not Supported 00:16:28.565 Feature Identifiers & Effects Log Page:May Support 00:16:28.565 NVMe-MI Commands & Effects Log Page: May Support 00:16:28.565 Data Area 4 for Telemetry Log: Not Supported 00:16:28.565 Error Log Page Entries Supported: 128 00:16:28.565 Keep Alive: Supported 00:16:28.565 Keep Alive Granularity: 10000 ms 00:16:28.565 00:16:28.565 NVM Command Set Attributes 00:16:28.565 ========================== 00:16:28.565 Submission Queue Entry Size 00:16:28.565 Max: 64 00:16:28.565 Min: 64 00:16:28.565 Completion Queue Entry Size 00:16:28.565 Max: 16 00:16:28.565 Min: 16 00:16:28.565 Number of Namespaces: 32 00:16:28.565 Compare Command: Supported 00:16:28.565 Write Uncorrectable Command: Not Supported 00:16:28.565 Dataset 
Management Command: Supported 00:16:28.565 Write Zeroes Command: Supported 00:16:28.565 Set Features Save Field: Not Supported 00:16:28.566 Reservations: Supported 00:16:28.566 Timestamp: Not Supported 00:16:28.566 Copy: Supported 00:16:28.566 Volatile Write Cache: Present 00:16:28.566 Atomic Write Unit (Normal): 1 00:16:28.566 Atomic Write Unit (PFail): 1 00:16:28.566 Atomic Compare & Write Unit: 1 00:16:28.566 Fused Compare & Write: Supported 00:16:28.566 Scatter-Gather List 00:16:28.566 SGL Command Set: Supported 00:16:28.566 SGL Keyed: Supported 00:16:28.566 SGL Bit Bucket Descriptor: Not Supported 00:16:28.566 SGL Metadata Pointer: Not Supported 00:16:28.566 Oversized SGL: Not Supported 00:16:28.566 SGL Metadata Address: Not Supported 00:16:28.566 SGL Offset: Supported 00:16:28.566 Transport SGL Data Block: Not Supported 00:16:28.566 Replay Protected Memory Block: Not Supported 00:16:28.566 00:16:28.566 Firmware Slot Information 00:16:28.566 ========================= 00:16:28.566 Active slot: 1 00:16:28.566 Slot 1 Firmware Revision: 24.09 00:16:28.566 00:16:28.566 00:16:28.566 Commands Supported and Effects 00:16:28.566 ============================== 00:16:28.566 Admin Commands 00:16:28.566 -------------- 00:16:28.566 Get Log Page (02h): Supported 00:16:28.566 Identify (06h): Supported 00:16:28.566 Abort (08h): Supported 00:16:28.566 Set Features (09h): Supported 00:16:28.566 Get Features (0Ah): Supported 00:16:28.566 Asynchronous Event Request (0Ch): Supported 00:16:28.566 Keep Alive (18h): Supported 00:16:28.566 I/O Commands 00:16:28.566 ------------ 00:16:28.566 Flush (00h): Supported LBA-Change 00:16:28.566 Write (01h): Supported LBA-Change 00:16:28.566 Read (02h): Supported 00:16:28.566 Compare (05h): Supported 00:16:28.566 Write Zeroes (08h): Supported LBA-Change 00:16:28.566 Dataset Management (09h): Supported LBA-Change 00:16:28.566 Copy (19h): Supported LBA-Change 00:16:28.566 00:16:28.566 Error Log 00:16:28.566 ========= 00:16:28.566 00:16:28.566 Arbitration 00:16:28.566 =========== 00:16:28.566 Arbitration Burst: 1 00:16:28.566 00:16:28.566 Power Management 00:16:28.566 ================ 00:16:28.566 Number of Power States: 1 00:16:28.566 Current Power State: Power State #0 00:16:28.566 Power State #0: 00:16:28.566 Max Power: 0.00 W 00:16:28.566 Non-Operational State: Operational 00:16:28.566 Entry Latency: Not Reported 00:16:28.566 Exit Latency: Not Reported 00:16:28.566 Relative Read Throughput: 0 00:16:28.566 Relative Read Latency: 0 00:16:28.566 Relative Write Throughput: 0 00:16:28.566 Relative Write Latency: 0 00:16:28.566 Idle Power: Not Reported 00:16:28.566 Active Power: Not Reported 00:16:28.566 Non-Operational Permissive Mode: Not Supported 00:16:28.566 00:16:28.566 Health Information 00:16:28.566 ================== 00:16:28.566 Critical Warnings: 00:16:28.566 Available Spare Space: OK 00:16:28.566 Temperature: OK 00:16:28.566 Device Reliability: OK 00:16:28.566 Read Only: No 00:16:28.566 Volatile Memory Backup: OK 00:16:28.566 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:28.566 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:28.566 Available Spare: 0% 00:16:28.566 Available Spare Threshold: 0% 00:16:28.566 Life Percentage Used:[2024-07-15 19:32:18.186151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.186155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb102c0) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.186163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:16:28.566 [2024-07-15 19:32:18.186169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.566 [2024-07-15 19:32:18.186173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.186177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10440) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.186299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.186307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xaccc00) 00:16:28.566 [2024-07-15 19:32:18.186316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.566 [2024-07-15 19:32:18.186342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb10440, cid 7, qid 0 00:16:28.566 [2024-07-15 19:32:18.190374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.566 [2024-07-15 19:32:18.190394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.566 [2024-07-15 19:32:18.190400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb10440) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190450] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:28.566 [2024-07-15 19:32:18.190463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0f9c0) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.566 [2024-07-15 19:32:18.190478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fb40) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.566 [2024-07-15 19:32:18.190488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fcc0) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.566 [2024-07-15 19:32:18.190499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.566 [2024-07-15 19:32:18.190515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.566 [2024-07-15 19:32:18.190534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.566 [2024-07-15 19:32:18.190563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.566 [2024-07-15 19:32:18.190621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.566 [2024-07-15 
19:32:18.190629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.566 [2024-07-15 19:32:18.190633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.566 [2024-07-15 19:32:18.190668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.566 [2024-07-15 19:32:18.190692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.566 [2024-07-15 19:32:18.190769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.566 [2024-07-15 19:32:18.190776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.566 [2024-07-15 19:32:18.190780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190789] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:28.566 [2024-07-15 19:32:18.190794] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:28.566 [2024-07-15 19:32:18.190805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.566 [2024-07-15 19:32:18.190823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.566 [2024-07-15 19:32:18.190842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.566 [2024-07-15 19:32:18.190901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.566 [2024-07-15 19:32:18.190908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.566 [2024-07-15 19:32:18.190912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.566 [2024-07-15 19:32:18.190916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.566 [2024-07-15 19:32:18.190928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.190933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.190937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.190945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.190964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191018] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191400] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 
[2024-07-15 19:32:18.191778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.191883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.191898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.191908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.191915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.191934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.191992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.191999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.192003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.192018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.192036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.192055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.192112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.192119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.192123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.192139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 
19:32:18.192148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.192155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.192175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.192233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.192241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.192244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.192260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.192277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.192296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.192353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.192388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.192394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.567 [2024-07-15 19:32:18.192412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.567 [2024-07-15 19:32:18.192421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.567 [2024-07-15 19:32:18.192429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.567 [2024-07-15 19:32:18.192453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.567 [2024-07-15 19:32:18.192514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.567 [2024-07-15 19:32:18.192521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.567 [2024-07-15 19:32:18.192526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.192541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.192558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.192578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.192632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.192639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.192643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.192659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.192675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.192695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.192751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.192759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.192763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.192778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.192795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.192815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.192871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.192878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.192882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.192898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.192907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.192915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.192935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 
19:32:18.192990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.192997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 
19:32:18.193384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 
00:16:28.568 [2024-07-15 19:32:18.193771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.193888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.193905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.193924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.193979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.193986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.568 [2024-07-15 19:32:18.193990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.193994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.568 [2024-07-15 19:32:18.194005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.194010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.568 [2024-07-15 19:32:18.194014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.568 [2024-07-15 19:32:18.194022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.568 [2024-07-15 19:32:18.194041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.568 [2024-07-15 19:32:18.194107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.568 [2024-07-15 19:32:18.194114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.569 [2024-07-15 19:32:18.194118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.194122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.569 [2024-07-15 19:32:18.194133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.194138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:28.569 [2024-07-15 19:32:18.194142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.569 [2024-07-15 19:32:18.194150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.569 [2024-07-15 19:32:18.194169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.569 [2024-07-15 19:32:18.194223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.569 [2024-07-15 19:32:18.194243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.569 [2024-07-15 19:32:18.194249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.194253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.569 [2024-07-15 19:32:18.194266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.194271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.194275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.569 [2024-07-15 19:32:18.194283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.569 [2024-07-15 19:32:18.194305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.569 [2024-07-15 19:32:18.198383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.569 [2024-07-15 19:32:18.198403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.569 [2024-07-15 19:32:18.198409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.198413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.569 [2024-07-15 19:32:18.198428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.198433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.198437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaccc00) 00:16:28.569 [2024-07-15 19:32:18.198446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.569 [2024-07-15 19:32:18.198473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb0fe40, cid 3, qid 0 00:16:28.569 [2024-07-15 19:32:18.198535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:28.569 [2024-07-15 19:32:18.198543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:28.569 [2024-07-15 19:32:18.198547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:28.569 [2024-07-15 19:32:18.198551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb0fe40) on tqpair=0xaccc00 00:16:28.569 [2024-07-15 19:32:18.198560] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:16:28.569 0% 00:16:28.569 Data Units Read: 0 00:16:28.569 Data Units Written: 0 00:16:28.569 Host Read Commands: 0 00:16:28.569 Host Write Commands: 0 00:16:28.569 Controller Busy Time: 0 minutes 00:16:28.569 Power Cycles: 0 00:16:28.569 Power On Hours: 0 hours 
00:16:28.569 Unsafe Shutdowns: 0 00:16:28.569 Unrecoverable Media Errors: 0 00:16:28.569 Lifetime Error Log Entries: 0 00:16:28.569 Warning Temperature Time: 0 minutes 00:16:28.569 Critical Temperature Time: 0 minutes 00:16:28.569 00:16:28.569 Number of Queues 00:16:28.569 ================ 00:16:28.569 Number of I/O Submission Queues: 127 00:16:28.569 Number of I/O Completion Queues: 127 00:16:28.569 00:16:28.569 Active Namespaces 00:16:28.569 ================= 00:16:28.569 Namespace ID:1 00:16:28.569 Error Recovery Timeout: Unlimited 00:16:28.569 Command Set Identifier: NVM (00h) 00:16:28.569 Deallocate: Supported 00:16:28.569 Deallocated/Unwritten Error: Not Supported 00:16:28.569 Deallocated Read Value: Unknown 00:16:28.569 Deallocate in Write Zeroes: Not Supported 00:16:28.569 Deallocated Guard Field: 0xFFFF 00:16:28.569 Flush: Supported 00:16:28.569 Reservation: Supported 00:16:28.569 Namespace Sharing Capabilities: Multiple Controllers 00:16:28.569 Size (in LBAs): 131072 (0GiB) 00:16:28.569 Capacity (in LBAs): 131072 (0GiB) 00:16:28.569 Utilization (in LBAs): 131072 (0GiB) 00:16:28.569 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:28.569 EUI64: ABCDEF0123456789 00:16:28.569 UUID: aa95ad36-b7fe-45b6-81f9-c3df619aeeb3 00:16:28.569 Thin Provisioning: Not Supported 00:16:28.569 Per-NS Atomic Units: Yes 00:16:28.569 Atomic Boundary Size (Normal): 0 00:16:28.569 Atomic Boundary Size (PFail): 0 00:16:28.569 Atomic Boundary Offset: 0 00:16:28.569 Maximum Single Source Range Length: 65535 00:16:28.569 Maximum Copy Length: 65535 00:16:28.569 Maximum Source Range Count: 1 00:16:28.569 NGUID/EUI64 Never Reused: No 00:16:28.569 Namespace Write Protected: No 00:16:28.569 Number of LBA Formats: 1 00:16:28.569 Current LBA Format: LBA Format #00 00:16:28.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:28.569 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.569 rmmod nvme_tcp 00:16:28.569 rmmod nvme_fabrics 00:16:28.569 rmmod nvme_keyring 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86726 ']' 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # 
killprocess 86726 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86726 ']' 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86726 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86726 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:28.569 killing process with pid 86726 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86726' 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86726 00:16:28.569 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86726 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:28.828 00:16:28.828 real 0m2.533s 00:16:28.828 user 0m7.301s 00:16:28.828 sys 0m0.594s 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.828 19:32:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:28.828 ************************************ 00:16:28.828 END TEST nvmf_identify 00:16:28.828 ************************************ 00:16:28.828 19:32:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:28.828 19:32:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:28.828 19:32:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:28.828 19:32:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.828 19:32:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.828 ************************************ 00:16:28.828 START TEST nvmf_perf 00:16:28.828 ************************************ 00:16:28.828 19:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:29.087 * Looking for test storage... 
00:16:29.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.087 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:29.088 Cannot find device "nvmf_tgt_br" 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.088 Cannot find device "nvmf_tgt_br2" 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:29.088 Cannot find device "nvmf_tgt_br" 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:29.088 Cannot find device "nvmf_tgt_br2" 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.088 
19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:29.088 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:29.347 19:32:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:29.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:29.347 00:16:29.347 --- 10.0.0.2 ping statistics --- 00:16:29.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.347 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:29.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:29.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:29.347 00:16:29.347 --- 10.0.0.3 ping statistics --- 00:16:29.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.347 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:29.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:29.347 00:16:29.347 --- 10.0.0.1 ping statistics --- 00:16:29.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.347 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86947 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86947 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86947 ']' 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.347 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.348 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.348 19:32:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:29.348 [2024-07-15 19:32:19.130285] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:29.348 [2024-07-15 19:32:19.130399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.606 [2024-07-15 19:32:19.271539] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.606 [2024-07-15 19:32:19.332521] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.606 [2024-07-15 19:32:19.332576] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
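The perf.sh steps recorded below bring up an NVMe/TCP target over rpc.py and then aim spdk_nvme_perf at it. As a reading aid, here is a condensed sketch of that sequence; the rpc.py subcommands, NQN, and the 10.0.0.2:4420 listener are taken from the log itself, while the surrounding scaffolding (variable names, running it as a standalone script) is illustrative and assumes nvmf_tgt is already running inside the nvmf_tgt_ns_spdk namespace on the default RPC socket.

#!/usr/bin/env bash
# Sketch of the target bring-up performed by test/nvmf/host/perf.sh (see the log below).
# Assumes a built SPDK tree at the CI paths and a running nvmf_tgt reachable on
# /var/tmp/spdk.sock; not the exact harness code, just the same RPC sequence.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# Attach local NVMe controllers as bdevs and add a 64 MiB malloc bdev with 512 B blocks.
"$SPDK/scripts/gen_nvme.sh" | "$RPC" load_subsystem_config
"$RPC" bdev_malloc_create 64 512                        # returns Malloc0

# TCP transport with the same flags the harness passes, plus an allow-any-host (-a)
# subsystem with serial number SPDK00000000000001 (-s).
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001

# Expose both bdevs as namespaces and listen on the veth address used throughout the log.
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
"$RPC" nvmf_subsystem_add_ns "$NQN" Nvme0n1
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# One initiator-side run from the log: queue depth 1, 4 KiB, 50/50 random read/write, 1 s.
"$SPDK/build/bin/spdk_nvme_perf" -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later spdk_nvme_perf runs in the log reuse the same target and only vary the queue depth, I/O size, run time, and extra flags.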
00:16:29.606 [2024-07-15 19:32:19.332589] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.606 [2024-07-15 19:32:19.332597] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.606 [2024-07-15 19:32:19.332604] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.606 [2024-07-15 19:32:19.333416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.606 [2024-07-15 19:32:19.333498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.606 [2024-07-15 19:32:19.333588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.606 [2024-07-15 19:32:19.333592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:30.541 19:32:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:31.106 19:32:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:31.106 19:32:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:31.106 19:32:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:31.106 19:32:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.670 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:31.670 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:31.670 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:31.670 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:31.670 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:31.670 [2024-07-15 19:32:21.391694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.670 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:31.926 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:31.926 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.183 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:32.183 19:32:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:32.446 19:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.705 [2024-07-15 19:32:22.364949] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.705 19:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.961 19:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:32.961 19:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:32.961 19:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:32.961 19:32:22 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:34.330 Initializing NVMe Controllers 00:16:34.330 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:34.330 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:34.330 Initialization complete. Launching workers. 00:16:34.330 ======================================================== 00:16:34.330 Latency(us) 00:16:34.330 Device Information : IOPS MiB/s Average min max 00:16:34.330 PCIE (0000:00:10.0) NSID 1 from core 0: 24365.04 95.18 1313.41 276.99 7860.75 00:16:34.330 ======================================================== 00:16:34.330 Total : 24365.04 95.18 1313.41 276.99 7860.75 00:16:34.330 00:16:34.330 19:32:23 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:35.700 Initializing NVMe Controllers 00:16:35.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:35.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:35.700 Initialization complete. Launching workers. 00:16:35.700 ======================================================== 00:16:35.700 Latency(us) 00:16:35.700 Device Information : IOPS MiB/s Average min max 00:16:35.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3178.20 12.41 313.03 120.72 7108.28 00:16:35.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.65 5016.71 12024.51 00:16:35.700 ======================================================== 00:16:35.700 Total : 3301.70 12.90 606.57 120.72 12024.51 00:16:35.700 00:16:35.700 19:32:25 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:37.084 Initializing NVMe Controllers 00:16:37.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:37.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:37.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:37.084 Initialization complete. Launching workers. 
00:16:37.084 ======================================================== 00:16:37.084 Latency(us) 00:16:37.084 Device Information : IOPS MiB/s Average min max 00:16:37.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8495.24 33.18 3770.69 762.93 7577.16 00:16:37.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2715.76 10.61 11892.07 6218.09 20429.20 00:16:37.084 ======================================================== 00:16:37.084 Total : 11211.00 43.79 5738.01 762.93 20429.20 00:16:37.084 00:16:37.084 19:32:26 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:37.084 19:32:26 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:39.610 Initializing NVMe Controllers 00:16:39.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:39.610 Controller IO queue size 128, less than required. 00:16:39.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.610 Controller IO queue size 128, less than required. 00:16:39.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:39.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:39.610 Initialization complete. Launching workers. 00:16:39.610 ======================================================== 00:16:39.610 Latency(us) 00:16:39.610 Device Information : IOPS MiB/s Average min max 00:16:39.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1487.96 371.99 87542.61 55072.59 158269.92 00:16:39.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 516.49 129.12 272460.08 115963.41 512820.77 00:16:39.610 ======================================================== 00:16:39.610 Total : 2004.44 501.11 135190.34 55072.59 512820.77 00:16:39.610 00:16:39.610 19:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:39.868 Initializing NVMe Controllers 00:16:39.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:39.868 Controller IO queue size 128, less than required. 00:16:39.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.868 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:39.868 Controller IO queue size 128, less than required. 00:16:39.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.868 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:39.868 WARNING: Some requested NVMe devices were skipped 00:16:39.868 No valid NVMe controllers or AIO or URING devices found 00:16:39.868 19:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:42.395 Initializing NVMe Controllers 00:16:42.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.395 Controller IO queue size 128, less than required. 00:16:42.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.395 Controller IO queue size 128, less than required. 00:16:42.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:42.395 Initialization complete. Launching workers. 00:16:42.395 00:16:42.395 ==================== 00:16:42.395 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:42.395 TCP transport: 00:16:42.395 polls: 9941 00:16:42.395 idle_polls: 4314 00:16:42.395 sock_completions: 5627 00:16:42.395 nvme_completions: 3553 00:16:42.395 submitted_requests: 5334 00:16:42.395 queued_requests: 1 00:16:42.395 00:16:42.395 ==================== 00:16:42.395 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:42.395 TCP transport: 00:16:42.395 polls: 12174 00:16:42.395 idle_polls: 8075 00:16:42.395 sock_completions: 4099 00:16:42.395 nvme_completions: 7473 00:16:42.395 submitted_requests: 11196 00:16:42.395 queued_requests: 1 00:16:42.395 ======================================================== 00:16:42.395 Latency(us) 00:16:42.395 Device Information : IOPS MiB/s Average min max 00:16:42.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 886.88 221.72 148502.83 83997.41 252444.90 00:16:42.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1865.64 466.41 69290.46 26662.86 126462.35 00:16:42.395 ======================================================== 00:16:42.395 Total : 2752.52 688.13 94813.17 26662.86 252444.90 00:16:42.395 00:16:42.395 19:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:42.395 19:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.962 rmmod nvme_tcp 00:16:42.962 rmmod nvme_fabrics 00:16:42.962 rmmod nvme_keyring 00:16:42.962 19:32:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86947 ']' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86947 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86947 ']' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86947 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86947 00:16:42.962 killing process with pid 86947 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86947' 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86947 00:16:42.962 19:32:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86947 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:43.529 00:16:43.529 real 0m14.621s 00:16:43.529 user 0m53.793s 00:16:43.529 sys 0m3.556s 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.529 ************************************ 00:16:43.529 END TEST nvmf_perf 00:16:43.529 ************************************ 00:16:43.529 19:32:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:43.529 19:32:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:43.529 19:32:33 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:43.529 19:32:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.529 19:32:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.529 19:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.529 ************************************ 00:16:43.529 START TEST nvmf_fio_host 00:16:43.529 ************************************ 00:16:43.529 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:43.786 * Looking for test storage... 
00:16:43.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.786 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
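Before the fio_host test can reach anything, nvmftestinit replays the same nvmf_veth_init network bring-up already seen in the perf test above. For reference, a minimal sketch of the topology those ip/iptables commands create, using the interface names and addresses from the log; the pre-cleanup of stale devices (the "Cannot find device" noise) is omitted, and everything here needs root.

#!/usr/bin/env bash
# Minimal sketch of the veth/bridge topology built by nvmf_veth_init (see surrounding log).
# The initiator side stays in the default netns (10.0.0.1); the target interfaces live in
# the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), all joined by bridge nvmf_br.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end carries an address, the *_br end gets enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target endpoints into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in on port 4420 and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The 10.0.0.2 address assigned here is the traddr that every listener, perf, and fio run in this log targets; 10.0.0.3 is the second target address (NVMF_SECOND_TARGET_IP) used by other tests in the suite.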
00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:43.787 Cannot find device "nvmf_tgt_br" 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.787 Cannot find device "nvmf_tgt_br2" 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:43.787 Cannot find device "nvmf_tgt_br" 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:43.787 Cannot find device "nvmf_tgt_br2" 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.787 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:44.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:44.046 00:16:44.046 --- 10.0.0.2 ping statistics --- 00:16:44.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.046 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:44.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:44.046 00:16:44.046 --- 10.0.0.3 ping statistics --- 00:16:44.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.046 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:44.046 00:16:44.046 --- 10.0.0.1 ping statistics --- 00:16:44.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.046 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87429 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87429 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87429 ']' 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.046 19:32:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.046 [2024-07-15 19:32:33.822030] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:44.046 [2024-07-15 19:32:33.822126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.306 [2024-07-15 19:32:33.958046] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.306 [2024-07-15 19:32:34.018922] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:44.306 [2024-07-15 19:32:34.018973] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.306 [2024-07-15 19:32:34.018984] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.306 [2024-07-15 19:32:34.018993] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.306 [2024-07-15 19:32:34.019000] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.306 [2024-07-15 19:32:34.019166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.306 [2024-07-15 19:32:34.019278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.306 [2024-07-15 19:32:34.019402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.306 [2024-07-15 19:32:34.019403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.306 19:32:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.306 19:32:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:16:44.306 19:32:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:44.565 [2024-07-15 19:32:34.331391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.823 19:32:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:44.823 19:32:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.823 19:32:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.823 19:32:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:45.081 Malloc1 00:16:45.081 19:32:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:45.340 19:32:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.600 19:32:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.858 [2024-07-15 19:32:35.582765] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.858 19:32:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:46.117 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:46.376 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:46.376 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:46.376 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:46.376 19:32:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:46.376 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:46.376 fio-3.35 00:16:46.376 Starting 1 thread 00:16:48.907 00:16:48.907 test: (groupid=0, jobs=1): err= 0: pid=87548: Mon Jul 15 19:32:38 2024 00:16:48.907 read: IOPS=8236, BW=32.2MiB/s (33.7MB/s)(64.6MiB/2007msec) 00:16:48.907 slat (usec): min=2, max=330, avg= 2.65, stdev= 3.31 00:16:48.907 clat (usec): min=3168, max=17993, avg=8106.89, stdev=1095.60 00:16:48.907 lat (usec): min=3214, max=17996, avg=8109.54, stdev=1095.37 00:16:48.907 clat percentiles (usec): 00:16:48.907 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7308], 00:16:48.907 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8029], 00:16:48.907 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[ 9896], 00:16:48.907 | 99.00th=[11076], 99.50th=[13566], 99.90th=[16450], 99.95th=[16581], 00:16:48.907 | 99.99th=[17957] 00:16:48.907 bw ( KiB/s): min=30232, max=35448, per=99.94%, avg=32928.00, stdev=2425.17, samples=4 00:16:48.907 iops : min= 7558, max= 8862, avg=8232.00, stdev=606.29, samples=4 00:16:48.907 write: IOPS=8245, BW=32.2MiB/s (33.8MB/s)(64.6MiB/2007msec); 0 zone resets 00:16:48.907 slat (usec): min=2, max=246, avg= 2.79, stdev= 2.35 00:16:48.907 clat (usec): min=2386, max=16424, avg=7346.04, stdev=939.71 00:16:48.907 lat (usec): 
min=2401, max=16426, avg=7348.83, stdev=939.51 00:16:48.907 clat percentiles (usec): 00:16:48.907 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:16:48.907 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308], 00:16:48.907 | 70.00th=[ 7570], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 8979], 00:16:48.907 | 99.00th=[10028], 99.50th=[10945], 99.90th=[14615], 99.95th=[15008], 00:16:48.907 | 99.99th=[15270] 00:16:48.907 bw ( KiB/s): min=30152, max=35208, per=99.99%, avg=32978.00, stdev=2367.44, samples=4 00:16:48.907 iops : min= 7538, max= 8802, avg=8244.50, stdev=591.86, samples=4 00:16:48.907 lat (msec) : 4=0.09%, 10=97.10%, 20=2.81% 00:16:48.907 cpu : usr=68.20%, sys=22.63%, ctx=10, majf=0, minf=7 00:16:48.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:48.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.907 issued rwts: total=16531,16549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.907 00:16:48.907 Run status group 0 (all jobs): 00:16:48.907 READ: bw=32.2MiB/s (33.7MB/s), 32.2MiB/s-32.2MiB/s (33.7MB/s-33.7MB/s), io=64.6MiB (67.7MB), run=2007-2007msec 00:16:48.907 WRITE: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=64.6MiB (67.8MB), run=2007-2007msec 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:48.907 19:32:38 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:48.907 19:32:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:48.907 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:48.907 fio-3.35 00:16:48.907 Starting 1 thread 00:16:51.433 00:16:51.433 test: (groupid=0, jobs=1): err= 0: pid=87591: Mon Jul 15 19:32:40 2024 00:16:51.433 read: IOPS=7746, BW=121MiB/s (127MB/s)(243MiB/2008msec) 00:16:51.433 slat (usec): min=3, max=131, avg= 4.10, stdev= 2.10 00:16:51.433 clat (usec): min=3118, max=20530, avg=9721.80, stdev=2423.17 00:16:51.433 lat (usec): min=3122, max=20535, avg=9725.89, stdev=2423.32 00:16:51.433 clat percentiles (usec): 00:16:51.433 | 1.00th=[ 5145], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7439], 00:16:51.433 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10421], 00:16:51.433 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12649], 95.00th=[13566], 00:16:51.433 | 99.00th=[16057], 99.50th=[17171], 99.90th=[19006], 99.95th=[19268], 00:16:51.433 | 99.99th=[19530] 00:16:51.433 bw ( KiB/s): min=56192, max=69504, per=50.97%, avg=63176.00, stdev=5821.09, samples=4 00:16:51.433 iops : min= 3512, max= 4344, avg=3948.50, stdev=363.82, samples=4 00:16:51.433 write: IOPS=4582, BW=71.6MiB/s (75.1MB/s)(129MiB/1802msec); 0 zone resets 00:16:51.433 slat (usec): min=36, max=216, avg=41.11, stdev= 6.32 00:16:51.433 clat (usec): min=4678, max=21278, avg=12001.08, stdev=2135.64 00:16:51.433 lat (usec): min=4716, max=21328, avg=12042.19, stdev=2136.80 00:16:51.433 clat percentiles (usec): 00:16:51.433 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:16:51.433 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12387], 00:16:51.433 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[15664], 00:16:51.433 | 99.00th=[17957], 99.50th=[18482], 99.90th=[21103], 99.95th=[21365], 00:16:51.433 | 99.99th=[21365] 00:16:51.433 bw ( KiB/s): min=56544, max=73344, per=89.59%, avg=65680.00, stdev=7374.66, samples=4 00:16:51.433 iops : min= 3534, max= 4584, avg=4105.00, stdev=460.92, samples=4 00:16:51.433 lat (msec) : 4=0.12%, 10=41.59%, 20=58.21%, 50=0.08% 00:16:51.433 cpu : usr=74.25%, sys=15.94%, ctx=550, majf=0, minf=18 00:16:51.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:51.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.433 issued rwts: total=15554,8257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.433 00:16:51.433 Run status group 0 (all jobs): 00:16:51.433 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=243MiB (255MB), run=2008-2008msec 00:16:51.433 WRITE: bw=71.6MiB/s (75.1MB/s), 71.6MiB/s-71.6MiB/s 
(75.1MB/s-75.1MB/s), io=129MiB (135MB), run=1802-1802msec 00:16:51.433 19:32:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.433 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.433 rmmod nvme_tcp 00:16:51.691 rmmod nvme_fabrics 00:16:51.691 rmmod nvme_keyring 00:16:51.691 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87429 ']' 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87429 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87429 ']' 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87429 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87429 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.692 killing process with pid 87429 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87429' 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87429 00:16:51.692 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87429 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:51.956 ************************************ 00:16:51.956 END TEST nvmf_fio_host 00:16:51.956 ************************************ 00:16:51.956 00:16:51.956 real 0m8.250s 00:16:51.956 user 0m34.135s 00:16:51.956 sys 0m2.106s 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.956 19:32:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.956 19:32:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:51.956 19:32:41 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:51.956 19:32:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:51.956 19:32:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.956 19:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.956 ************************************ 00:16:51.956 START TEST nvmf_failover 00:16:51.956 ************************************ 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:51.956 * Looking for test storage... 00:16:51.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
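(For reference, the MALLOC_* values set here feed the same RPC sequence that the nvmf_fio_host run traced earlier used to stand up its target. Condensed into plain commands, that flow looks roughly like the sketch below; it is assembled only from calls already shown in this log, using this run's paths and addresses.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem plus listener
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: fio through the SPDK NVMe plugin; the --filename string encodes the NVMe/TCP path
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096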
00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:51.956 Cannot find device "nvmf_tgt_br" 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.956 Cannot find device "nvmf_tgt_br2" 00:16:51.956 19:32:41 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:51.956 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:52.216 Cannot find device "nvmf_tgt_br" 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:52.216 Cannot find device "nvmf_tgt_br2" 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:52.216 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:52.217 19:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:52.217 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:52.217 19:32:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:52.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:52.483 00:16:52.483 --- 10.0.0.2 ping statistics --- 00:16:52.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.483 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:52.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:52.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:52.483 00:16:52.483 --- 10.0.0.3 ping statistics --- 00:16:52.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.483 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:52.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:52.483 00:16:52.483 --- 10.0.0.1 ping statistics --- 00:16:52.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.483 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87804 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87804 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87804 ']' 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
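(The nvmf_veth_init calls above amount to a small bridged test network: the initiator keeps nvmf_init_if at 10.0.0.1 in the default namespace, nvmf_tgt_if at 10.0.0.2 is moved into the nvmf_tgt_ns_spdk namespace for the target, and the veth peers are joined through the nvmf_br bridge; a second target interface, nvmf_tgt_if2 at 10.0.0.3, is wired up the same way. A condensed sketch of that topology, using the same names as this run:)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability, as checked above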
00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.483 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:52.483 [2024-07-15 19:32:42.133122] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:52.483 [2024-07-15 19:32:42.133216] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.483 [2024-07-15 19:32:42.269433] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.742 [2024-07-15 19:32:42.334974] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.742 [2024-07-15 19:32:42.335260] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.742 [2024-07-15 19:32:42.335423] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.742 [2024-07-15 19:32:42.335483] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.742 [2024-07-15 19:32:42.335642] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
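(The target for this test is started inside that namespace with the flags recorded just above: -m 0xE runs reactors on cores 1-3, matching the reactor notices that follow, -e 0xFFFF sets the tracepoint group mask, and -i 0 selects the shared-memory instance, which is why the trace hint points at /dev/shm/nvmf_trace.0. Roughly:)

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

# Later, to look at the trace data the notices mention:
spdk_trace -s nvmf -i 0          # or copy /dev/shm/nvmf_trace.0 for offline analysis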
00:16:52.742 [2024-07-15 19:32:42.336206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.742 [2024-07-15 19:32:42.336349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.742 [2024-07-15 19:32:42.336352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.742 19:32:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.011 [2024-07-15 19:32:42.737396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.011 19:32:42 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:53.270 Malloc0 00:16:53.540 19:32:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.797 19:32:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:54.055 19:32:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.055 [2024-07-15 19:32:43.830271] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.055 19:32:43 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:54.313 [2024-07-15 19:32:44.110514] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:54.571 19:32:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:54.830 [2024-07-15 19:32:44.406742] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87908 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87908 /var/tmp/bdevperf.sock 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87908 ']' 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.830 19:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:55.765 19:32:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.765 19:32:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:55.765 19:32:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.331 NVMe0n1 00:16:56.331 19:32:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.589 00:16:56.589 19:32:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87951 00:16:56.589 19:32:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.589 19:32:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:57.564 19:32:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.823 [2024-07-15 19:32:47.535807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535899] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 00:16:57.823 [2024-07-15 19:32:47.535933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set 
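(Stripped of the xtrace and retry noise, the scenario this test drives is: expose one subsystem on three TCP listeners, give bdevperf two paths to it under a single controller name, then delete the listener the host is using and let I/O continue on the surviving path; the repeated recv-state notices around here appear immediately after each listener removal. A condensed sketch built from the commands in this run:)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target: one subsystem (nqn.2016-06.io.spdk:cnode1) listening on ports 4420, 4421 and 4422
# Host:   bdevperf attaches the same subsystem twice under the name NVMe0 (two paths)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Failover: drop the port in use, then keep rotating (add 4422 as a path, drop 4421, ...)
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421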
[tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddff0 is same with the state(5) to be set; the same notice is logged once per queue-pair state transition from 19:32:47.535941 through 19:32:47.536951, intermediate duplicates omitted]
00:16:57.824 19:32:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:17:01.105 19:32:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:01.362
00:17:01.362 19:32:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:01.620 [2024-07-15 19:32:51.243137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set
[the same notice then repeats for tqpair=0x1bdef20 at each subsequent timestamp; duplicates omitted]
[2024-07-15 
19:32:51.243485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243568] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same 
with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 [2024-07-15 19:32:51.243761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdef20 is same with the state(5) to be set 00:17:01.621 19:32:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:04.904 19:32:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.904 [2024-07-15 19:32:54.539249] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.904 19:32:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:05.840 19:32:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:06.098 19:32:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87951 00:17:12.699 0 00:17:12.699 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87908 00:17:12.699 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87908 ']' 00:17:12.699 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87908 00:17:12.699 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:12.699 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.699 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87908 00:17:12.699 killing process with pid 87908 00:17:12.700 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.700 
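The failover.sh steps above drive path failover purely through the target's listeners: a second controller path is attached over bdevperf's RPC socket, then listeners are removed and re-added so the initiator is forced from one portal to the next. A minimal sketch of that sequence, using the same rpc.py calls, addresses and NQN that appear in the log (the sleeps mirror the script's pauses and are illustrative values, not requirements):

  sleep 3
  # Attach a second path to the subsystem through bdevperf's RPC socket (port 4422).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  # Drop the listener the initiator is currently using; bdev_nvme fails over to the remaining path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  # Restore the original portal, then retire the temporary one.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422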
19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.700 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87908' 00:17:12.700 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87908 00:17:12.700 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87908 00:17:12.700 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:12.700 [2024-07-15 19:32:44.483291] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:17:12.700 [2024-07-15 19:32:44.483439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87908 ] 00:17:12.700 [2024-07-15 19:32:44.623956] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.700 [2024-07-15 19:32:44.693555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.700 Running I/O for 15 seconds... 00:17:12.700 [2024-07-15 19:32:47.537605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.537948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.537962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.700 [2024-07-15 19:32:47.538438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.700 [2024-07-15 19:32:47.538453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.538981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.538995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 
[2024-07-15 19:32:47.539127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.701 [2024-07-15 19:32:47.539316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.701 [2024-07-15 19:32:47.539332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.702 [2024-07-15 19:32:47.539724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.539983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.539998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81568 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 [2024-07-15 19:32:47.540382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.702 [2024-07-15 19:32:47.540400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.702 
[2024-07-15 19:32:47.540414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.540984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.540998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.703 [2024-07-15 19:32:47.541620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.703 [2024-07-15 19:32:47.541634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:47.541669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.704 [2024-07-15 19:32:47.541684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.704 [2024-07-15 19:32:47.541696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81968 
len:8 PRP1 0x0 PRP2 0x0 00:17:12.704 [2024-07-15 19:32:47.541709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:47.541759] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeecd60 was disconnected and freed. reset controller. 00:17:12.704 [2024-07-15 19:32:47.541779] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:12.704 [2024-07-15 19:32:47.541835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.704 [2024-07-15 19:32:47.541856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:47.541871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.704 [2024-07-15 19:32:47.541888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:47.541903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.704 [2024-07-15 19:32:47.541916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:47.541930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.704 [2024-07-15 19:32:47.541945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:47.541959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:12.704 [2024-07-15 19:32:47.545942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:12.704 [2024-07-15 19:32:47.545980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7dfd0 (9): Bad file descriptor 00:17:12.704 [2024-07-15 19:32:47.578370] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
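The block above is the first failover cycle of this run: every READ/WRITE still queued on the qpair to 10.0.0.2:4420 is completed by the host with ABORTED - SQ DELETION once that qpair is disconnected and freed, bdev_nvme then starts failover to 10.0.0.2:4421, and the controller reset finishes successfully. Below is a minimal sketch of the kind of rpc.py sequence that exercises this path; it is not the exact script this job ran, the controller name Nvme0 and namespace bdev Malloc0 are placeholders, and only the NQN, address and ports are taken from the log itself.

# Target side (assumes a running nvmf_tgt with the default RPC socket): one subsystem, two TCP listeners
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4421

# Host side: register both trids under the same controller name so bdev_nvme has a failover path
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

# Removing the active listener on the target is one way to force the abort + failover sequence seen above
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420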
00:17:12.704 [2024-07-15 19:32:51.243869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.243917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.243945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.243961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244245] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244891] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.704 [2024-07-15 19:32:51.244921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.704 [2024-07-15 19:32:51.244935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.244951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.244966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.244981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.244995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.705 [2024-07-15 19:32:51.245366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 
19:32:51.245831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.245974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.245988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.246004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.246019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.246034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.246048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.246064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.705 [2024-07-15 19:32:51.246077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.705 [2024-07-15 19:32:51.246093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:12.706 [2024-07-15 19:32:51.246791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.706 [2024-07-15 19:32:51.246865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.246894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.246932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.246962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.246978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.246991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 
19:32:51.247094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.706 [2024-07-15 19:32:51.247430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.706 [2024-07-15 19:32:51.247446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:51.247855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:51.247885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.247900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099340 is same with the state(5) to be set 00:17:12.707 [2024-07-15 19:32:51.247920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.707 [2024-07-15 19:32:51.247930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.707 [2024-07-15 19:32:51.247948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85768 len:8 PRP1 0x0 PRP2 0x0 00:17:12.707 [2024-07-15 19:32:51.248026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.248083] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1099340 was disconnected and freed. reset controller. 
00:17:12.707 [2024-07-15 19:32:51.248103] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:12.707 [2024-07-15 19:32:51.248165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.707 [2024-07-15 19:32:51.248186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.248202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.707 [2024-07-15 19:32:51.248215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.248229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.707 [2024-07-15 19:32:51.248243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.248257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.707 [2024-07-15 19:32:51.248271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:51.248285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:12.707 [2024-07-15 19:32:51.252295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:12.707 [2024-07-15 19:32:51.252343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7dfd0 (9): Bad file descriptor 00:17:12.707 [2024-07-15 19:32:51.285952] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
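Each completion printed in these bursts carries the same NVMe status, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), with dnr:0, i.e. the Do Not Retry bit clear, so the bdev layer is free to retry the aborted I/O once the controller has reconnected; that is why each cycle ends with "Resetting controller successful" and the I/O workload keeps running. After a cycle such as the 10.0.0.2:4421 to 10.0.0.2:4422 failover above, the surviving path can be checked from the host application's RPC socket; a small sketch follows, assuming the host I/O application is bdevperf with its RPC socket at /var/tmp/bdevperf.sock (both assumptions, not taken from this log).

# Host-side view of controllers and bdevs after a failover (socket path assumed)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
# newer SPDK releases also expose bdev_nvme_get_io_paths for per-path state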
00:17:12.707 [2024-07-15 19:32:55.856801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.707 [2024-07-15 19:32:55.856874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.856915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.856932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.856948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.856963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.856979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.856993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857216] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.707 [2024-07-15 19:32:55.857503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.707 [2024-07-15 19:32:55.857519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.708 [2024-07-15 19:32:55.857956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.857972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.857986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.708 [2024-07-15 19:32:55.858166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.708 [2024-07-15 19:32:55.858825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.708 [2024-07-15 19:32:55.858839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.858855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.858869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.858885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.858899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.858914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.858928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.858944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.858958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.858973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.858987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.709 [2024-07-15 19:32:55.859238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 
19:32:55.859757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.709 [2024-07-15 19:32:55.859979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.709 [2024-07-15 19:32:55.859993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:18440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.710 [2024-07-15 19:32:55.860687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17832 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:12.710 [2024-07-15 19:32:55.860716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.710 [2024-07-15 19:32:55.860746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.710 [2024-07-15 19:32:55.860777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.710 [2024-07-15 19:32:55.860813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.710 [2024-07-15 19:32:55.860866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17864 len:8 PRP1 0x0 PRP2 0x0 00:17:12.710 [2024-07-15 19:32:55.860880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.710 [2024-07-15 19:32:55.860908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.710 [2024-07-15 19:32:55.860919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17872 len:8 PRP1 0x0 PRP2 0x0 00:17:12.710 [2024-07-15 19:32:55.860932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.860946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.710 [2024-07-15 19:32:55.860957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.710 [2024-07-15 19:32:55.860968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17880 len:8 PRP1 0x0 PRP2 0x0 00:17:12.710 [2024-07-15 19:32:55.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.861032] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1099bb0 was disconnected and freed. reset controller. 
00:17:12.710 [2024-07-15 19:32:55.861052] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:12.710 [2024-07-15 19:32:55.861110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.710 [2024-07-15 19:32:55.861130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.861156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.710 [2024-07-15 19:32:55.861171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.861186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.710 [2024-07-15 19:32:55.861200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.861214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.710 [2024-07-15 19:32:55.861228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.710 [2024-07-15 19:32:55.861242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:12.710 [2024-07-15 19:32:55.861292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7dfd0 (9): Bad file descriptor 00:17:12.710 [2024-07-15 19:32:55.865250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:12.710 [2024-07-15 19:32:55.901315] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
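The long run of ABORTED - SQ DELETION completions above corresponds to queued I/O being aborted as the submission queue is deleted during path teardown: the disconnected qpair is freed, bdev_nvme starts a failover from 10.0.0.2:4422 to 10.0.0.2:4420, and the subsequent controller reset completes successfully. A minimal sketch of the RPC sequence that drives this kind of failover, using only the rpc.py calls that appear elsewhere in this log (subsystem creation and the bdevperf target setup are assumed to have run earlier, and the ordering here is illustrative rather than the exact failover.sh flow):
  # publish alternate TCP paths for the same subsystem (ports as used in this log)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the subsystem through bdevperf's RPC socket on each path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # detaching the active path while I/O is in flight forces the SQ deletion, failover and reset seen above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1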
00:17:12.710 00:17:12.710 Latency(us) 00:17:12.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.710 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.710 Verification LBA range: start 0x0 length 0x4000 00:17:12.710 NVMe0n1 : 15.01 8573.85 33.49 197.55 0.00 14559.87 569.72 52667.11 00:17:12.710 =================================================================================================================== 00:17:12.710 Total : 8573.85 33.49 197.55 0.00 14559.87 569.72 52667.11 00:17:12.710 Received shutdown signal, test time was about 15.000000 seconds 00:17:12.710 00:17:12.710 Latency(us) 00:17:12.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.710 =================================================================================================================== 00:17:12.710 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.710 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:12.710 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88160 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88160 /var/tmp/bdevperf.sock 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88160 ']' 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:12.711 19:33:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:12.711 [2024-07-15 19:33:02.219639] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:12.711 19:33:02 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:12.711 [2024-07-15 19:33:02.459901] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:12.711 19:33:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:12.969 NVMe0n1 00:17:13.227 19:33:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:13.524 00:17:13.524 19:33:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:13.803 00:17:13.803 19:33:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:13.803 19:33:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:14.060 19:33:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:14.316 19:33:04 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:17.598 19:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:17.598 19:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:17.598 19:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88277 00:17:17.598 19:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:17.598 19:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88277 00:17:18.975 0 00:17:18.975 19:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:18.975 [2024-07-15 19:33:01.666942] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:17:18.975 [2024-07-15 19:33:01.667051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88160 ] 00:17:18.975 [2024-07-15 19:33:01.802041] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.975 [2024-07-15 19:33:01.861596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.975 [2024-07-15 19:33:03.996847] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:18.975 [2024-07-15 19:33:03.996959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.975 [2024-07-15 19:33:03.996984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.975 [2024-07-15 19:33:03.997003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.975 [2024-07-15 19:33:03.997017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.975 [2024-07-15 19:33:03.997031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.975 [2024-07-15 19:33:03.997045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.975 [2024-07-15 19:33:03.997059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.975 [2024-07-15 19:33:03.997073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.975 [2024-07-15 19:33:03.997087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.975 [2024-07-15 19:33:03.997137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.975 [2024-07-15 19:33:03.997166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1686fd0 (9): Bad file descriptor 00:17:18.975 [2024-07-15 19:33:04.005102] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:18.975 Running I/O for 1 seconds... 
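The try.txt excerpt above is bdevperf's own view of one failover during the 1-second verify run: the controller fails over from 10.0.0.2:4420 to 10.0.0.2:4421, the outstanding admin commands (async event requests) are aborted, and the reset succeeds. The longer 15-second run is judged the same way the log shows at failover.sh@65: by counting successful resets in try.txt. A minimal sketch of that check, assuming try.txt holds the bdevperf output as in this run (the echo/exit handling is illustrative, not the script's):
  # roughly one successful controller reset is expected per forced failover;
  # this test expects three (count=3 in the log above)
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
          echo "unexpected reset count: $count"
          exit 1
  fi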
00:17:18.975 00:17:18.975 Latency(us) 00:17:18.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.975 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:18.975 Verification LBA range: start 0x0 length 0x4000 00:17:18.975 NVMe0n1 : 1.01 8865.20 34.63 0.00 0.00 14360.10 2606.55 13285.93 00:17:18.975 =================================================================================================================== 00:17:18.975 Total : 8865.20 34.63 0.00 0.00 14360.10 2606.55 13285.93 00:17:18.975 19:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:18.975 19:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:18.975 19:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:19.238 19:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:19.238 19:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:19.505 19:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.074 19:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88160 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88160 ']' 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88160 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88160 00:17:23.358 killing process with pid 88160 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88160' 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88160 00:17:23.358 19:33:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88160 00:17:23.358 19:33:13 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:23.358 19:33:13 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:23.616 19:33:13 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.616 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.875 rmmod nvme_tcp 00:17:23.875 rmmod nvme_fabrics 00:17:23.875 rmmod nvme_keyring 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87804 ']' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87804 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87804 ']' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87804 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87804 00:17:23.875 killing process with pid 87804 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87804' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87804 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87804 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.875 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.134 19:33:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:24.134 00:17:24.134 real 0m32.096s 00:17:24.134 user 2m6.130s 00:17:24.134 sys 0m4.436s 00:17:24.134 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.134 19:33:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:24.134 ************************************ 00:17:24.134 END TEST nvmf_failover 00:17:24.134 ************************************ 00:17:24.134 19:33:13 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:24.134 19:33:13 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:24.134 19:33:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:24.134 19:33:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.134 19:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.134 ************************************ 00:17:24.134 START TEST nvmf_host_discovery 00:17:24.134 ************************************ 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:24.134 * Looking for test storage... 00:17:24.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.134 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:24.135 Cannot find device "nvmf_tgt_br" 00:17:24.135 
19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.135 Cannot find device "nvmf_tgt_br2" 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:24.135 Cannot find device "nvmf_tgt_br" 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:24.135 Cannot find device "nvmf_tgt_br2" 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:24.135 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.393 19:33:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:24.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:24.393 00:17:24.393 --- 10.0.0.2 ping statistics --- 00:17:24.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.393 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:24.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:24.393 00:17:24.393 --- 10.0.0.3 ping statistics --- 00:17:24.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.393 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:24.393 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:24.394 00:17:24.394 --- 10.0.0.1 ping statistics --- 00:17:24.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.394 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88586 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88586 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88586 ']' 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.652 19:33:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:24.652 [2024-07-15 19:33:14.300326] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:17:24.652 [2024-07-15 19:33:14.300448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.652 [2024-07-15 19:33:14.446178] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.947 [2024-07-15 19:33:14.527924] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:24.947 [2024-07-15 19:33:14.528000] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.947 [2024-07-15 19:33:14.528015] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.947 [2024-07-15 19:33:14.528025] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.947 [2024-07-15 19:33:14.528035] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.947 [2024-07-15 19:33:14.528063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.549 [2024-07-15 19:33:15.311013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.549 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.550 [2024-07-15 19:33:15.319095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.550 null0 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.550 null1 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88636 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88636 /tmp/host.sock 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88636 ']' 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:25.550 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.550 19:33:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.809 [2024-07-15 19:33:15.409018] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:17:25.809 [2024-07-15 19:33:15.409160] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88636 ] 00:17:25.809 [2024-07-15 19:33:15.552934] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.809 [2024-07-15 19:33:15.611252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:26.744 19:33:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:26.744 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 [2024-07-15 19:33:16.727516] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:27.002 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:17:27.261 19:33:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:27.826 [2024-07-15 19:33:17.384360] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:27.826 [2024-07-15 19:33:17.384432] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:27.826 [2024-07-15 19:33:17.384455] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:27.826 [2024-07-15 19:33:17.472566] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:27.826 [2024-07-15 19:33:17.535803] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:27.826 [2024-07-15 19:33:17.535869] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:28.393 19:33:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.393 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:28.651 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:28.652 
19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 [2024-07-15 19:33:18.304285] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:28.652 [2024-07-15 19:33:18.305199] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:28.652 [2024-07-15 19:33:18.305238] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 
00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 [2024-07-15 19:33:18.391282] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:28.652 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.652 [2024-07-15 19:33:18.449636] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:28.652 [2024-07-15 19:33:18.449874] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:28.652 [2024-07-15 19:33:18.449998] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:28.910 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:28.910 19:33:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:29.845 19:33:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.845 [2024-07-15 19:33:19.601638] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:29.845 [2024-07-15 19:33:19.601678] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:29.845 [2024-07-15 19:33:19.603521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.845 [2024-07-15 19:33:19.603568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.845 [2024-07-15 19:33:19.603584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.845 [2024-07-15 19:33:19.603595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.845 [2024-07-15 19:33:19.603606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.845 [2024-07-15 19:33:19.603616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.845 [2024-07-15 19:33:19.603628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.845 [2024-07-15 19:33:19.603638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.845 [2024-07-15 19:33:19.603648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.845 [2024-07-15 19:33:19.613480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.845 [2024-07-15 19:33:19.623507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:29.845 [2024-07-15 19:33:19.623947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.845 [2024-07-15 19:33:19.624085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:29.845 [2024-07-15 19:33:19.624245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:29.845 [2024-07-15 19:33:19.624294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:29.845 [2024-07-15 19:33:19.624314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:29.845 [2024-07-15 19:33:19.624325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:29.845 [2024-07-15 19:33:19.624337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:29.845 [2024-07-15 19:33:19.624371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
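The connect() failures with errno 111 (ECONNREFUSED) in the surrounding entries are the expected result of the nvmf_subsystem_remove_listener call traced a little earlier: once the 10.0.0.2:4420 listener is gone, the host's reconnect attempts to that port are refused while the 4421 path stays up. A minimal sketch of the two RPCs involved, using scripts/rpc.py directly rather than the suite's rpc_cmd wrapper and assuming the socket paths seen in this run:

# Target side (default /var/tmp/spdk.sock): drop the 4420 listener, leaving only 4421.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side (-s /tmp/host.sock): list the paths still attached to nvme0.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs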
00:17:29.845 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.845 [2024-07-15 19:33:19.633849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:29.845 [2024-07-15 19:33:19.633951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.845 [2024-07-15 19:33:19.633975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:29.845 [2024-07-15 19:33:19.633987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:29.845 [2024-07-15 19:33:19.634016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:29.845 [2024-07-15 19:33:19.634033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:29.845 [2024-07-15 19:33:19.634043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:29.845 [2024-07-15 19:33:19.634053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:29.845 [2024-07-15 19:33:19.634069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:29.845 [2024-07-15 19:33:19.643913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:29.845 [2024-07-15 19:33:19.644094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.845 [2024-07-15 19:33:19.644121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:29.845 [2024-07-15 19:33:19.644134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:29.845 [2024-07-15 19:33:19.644152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:29.845 [2024-07-15 19:33:19.644176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:29.845 [2024-07-15 19:33:19.644186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:29.845 [2024-07-15 19:33:19.644197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:29.845 [2024-07-15 19:33:19.644214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:30.104 [2024-07-15 19:33:19.653982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.104 [2024-07-15 19:33:19.654086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.104 [2024-07-15 19:33:19.654110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:30.104 [2024-07-15 19:33:19.654122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:30.104 [2024-07-15 19:33:19.654140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:30.104 [2024-07-15 19:33:19.654156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.104 [2024-07-15 19:33:19.654166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.104 [2024-07-15 19:33:19.654176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.104 [2024-07-15 19:33:19.654192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:30.104 [2024-07-15 19:33:19.664049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.104 [2024-07-15 19:33:19.664143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.104 [2024-07-15 19:33:19.664166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:30.104 [2024-07-15 19:33:19.664178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:30.104 [2024-07-15 19:33:19.664196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:30.104 [2024-07-15 19:33:19.664212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.104 [2024-07-15 19:33:19.664222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.104 [2024-07-15 19:33:19.664232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:30.104 [2024-07-15 19:33:19.664248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:30.104 [2024-07-15 19:33:19.674102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.104 [2024-07-15 19:33:19.674200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.104 [2024-07-15 19:33:19.674237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:30.104 [2024-07-15 19:33:19.674251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:30.104 [2024-07-15 19:33:19.674269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:30.104 [2024-07-15 19:33:19.674285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.104 [2024-07-15 19:33:19.674295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.104 [2024-07-15 19:33:19.674306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.104 [2024-07-15 19:33:19.674322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:30.104 [2024-07-15 19:33:19.684164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.104 [2024-07-15 19:33:19.684263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.104 [2024-07-15 19:33:19.684286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177ab20 with addr=10.0.0.2, port=4420 00:17:30.104 [2024-07-15 19:33:19.684298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ab20 is same with the state(5) to be set 00:17:30.104 [2024-07-15 19:33:19.684316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ab20 (9): Bad file descriptor 00:17:30.104 [2024-07-15 19:33:19.684331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.104 [2024-07-15 19:33:19.684342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.104 [2024-07-15 19:33:19.684352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.104 [2024-07-15 19:33:19.684383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
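The repeated "local max=10", "(( max-- ))", eval and "sleep 1" xtrace entries throughout this section come from the suite's waitforcondition helper in common/autotest_common.sh. A minimal reconstruction from the traced lines only (a sketch: the real helper may differ, for example in how it reports the timeout case):

# Polls an arbitrary shell condition once per second, up to 10 times.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # The condition string is evaluated verbatim, e.g.
        # '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1   # assumed failure path; not visible in this trace
}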
00:17:30.104 [2024-07-15 19:33:19.687730] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:30.104 [2024-07-15 19:33:19.687761] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:30.104 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.105 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:30.363 
19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.363 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 19:33:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.364 19:33:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.299 [2024-07-15 19:33:21.045052] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:31.299 [2024-07-15 19:33:21.045095] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:31.299 [2024-07-15 19:33:21.045116] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:31.558 [2024-07-15 19:33:21.131184] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:31.558 [2024-07-15 19:33:21.191629] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:31.558 [2024-07-15 19:33:21.191688] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.558 2024/07/15 19:33:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:31.558 request: 00:17:31.558 { 00:17:31.558 "method": "bdev_nvme_start_discovery", 00:17:31.558 "params": { 00:17:31.558 "name": "nvme", 00:17:31.558 "trtype": "tcp", 00:17:31.558 "traddr": "10.0.0.2", 00:17:31.558 "adrfam": "ipv4", 00:17:31.558 "trsvcid": "8009", 00:17:31.558 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:31.558 "wait_for_attach": true 00:17:31.558 } 00:17:31.558 } 00:17:31.558 Got JSON-RPC error response 00:17:31.558 GoRPCClient: error on JSON-RPC call 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.558 2024/07/15 19:33:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:31.558 request: 00:17:31.558 { 00:17:31.558 "method": "bdev_nvme_start_discovery", 00:17:31.558 "params": { 00:17:31.558 "name": "nvme_second", 00:17:31.558 "trtype": "tcp", 00:17:31.558 "traddr": "10.0.0.2", 00:17:31.558 "adrfam": "ipv4", 00:17:31.558 "trsvcid": "8009", 00:17:31.558 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:31.558 "wait_for_attach": true 00:17:31.558 } 00:17:31.558 } 00:17:31.558 Got JSON-RPC error response 00:17:31.558 GoRPCClient: error on JSON-RPC call 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.558 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:31.559 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.817 19:33:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.751 [2024-07-15 19:33:22.452617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.751 [2024-07-15 19:33:22.452703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812130 with addr=10.0.0.2, port=8010 00:17:32.751 [2024-07-15 19:33:22.452728] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:32.751 [2024-07-15 19:33:22.452740] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:32.751 [2024-07-15 19:33:22.452751] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:33.685 [2024-07-15 19:33:23.452577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:33.685 [2024-07-15 19:33:23.452670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812130 with addr=10.0.0.2, port=8010 00:17:33.685 [2024-07-15 19:33:23.452694] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:33.685 [2024-07-15 19:33:23.452705] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:33.685 [2024-07-15 19:33:23.452715] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:35.058 [2024-07-15 19:33:24.452420] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
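The refused connect() attempts above (one per second against 10.0.0.2:8010) exhaust the 3000 ms attach timeout, so the discovery poller gives up and the RPC below fails with -110 (Connection timed out); the earlier -17 (File exists) errors came from reusing a discovery name that is already running. A minimal sketch of both negative cases driven with scripts/rpc.py against the same host sock (the test itself goes through its rpc_cmd/NOT wrappers; addresses and NQNs are the ones from this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Name 'nvme' already has discovery running on 10.0.0.2:8009 -> Code=-17, File exists.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Nothing listens on 10.0.0.2:8010, so the 3000 ms attach timeout expires -> Code=-110.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000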
00:17:35.058 2024/07/15 19:33:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:35.058 request: 00:17:35.058 { 00:17:35.058 "method": "bdev_nvme_start_discovery", 00:17:35.058 "params": { 00:17:35.058 "name": "nvme_second", 00:17:35.058 "trtype": "tcp", 00:17:35.058 "traddr": "10.0.0.2", 00:17:35.058 "adrfam": "ipv4", 00:17:35.058 "trsvcid": "8010", 00:17:35.058 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:35.058 "wait_for_attach": false, 00:17:35.058 "attach_timeout_ms": 3000 00:17:35.058 } 00:17:35.058 } 00:17:35.058 Got JSON-RPC error response 00:17:35.058 GoRPCClient: error on JSON-RPC call 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88636 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.058 rmmod nvme_tcp 00:17:35.058 rmmod nvme_fabrics 00:17:35.058 rmmod nvme_keyring 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:35.058 19:33:24 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88586 ']' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88586 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88586 ']' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88586 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88586 00:17:35.058 killing process with pid 88586 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88586' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88586 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88586 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:35.058 ************************************ 00:17:35.058 END TEST nvmf_host_discovery 00:17:35.058 ************************************ 00:17:35.058 00:17:35.058 real 0m11.092s 00:17:35.058 user 0m21.831s 00:17:35.058 sys 0m1.599s 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.058 19:33:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.317 19:33:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.317 19:33:24 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:35.317 19:33:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.317 19:33:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.317 19:33:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.317 ************************************ 00:17:35.317 START TEST nvmf_host_multipath_status 00:17:35.317 ************************************ 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:35.317 * Looking for test 
storage... 00:17:35.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.317 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.318 19:33:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.318 Cannot find device "nvmf_tgt_br" 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:17:35.318 Cannot find device "nvmf_tgt_br2" 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.318 Cannot find device "nvmf_tgt_br" 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.318 Cannot find device "nvmf_tgt_br2" 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.318 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.576 19:33:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:35.576 00:17:35.576 --- 10.0.0.2 ping statistics --- 00:17:35.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.576 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:35.576 00:17:35.576 --- 10.0.0.3 ping statistics --- 00:17:35.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.576 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:35.576 00:17:35.576 --- 10.0.0.1 ping statistics --- 00:17:35.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.576 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:35.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89122 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89122 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89122 ']' 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.576 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:35.576 [2024-07-15 19:33:25.350674] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
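The nvmf/common.sh entries above (@166 through @207) build the test network that everything later in this run depends on: a dedicated network namespace for the target, three veth pairs tied together by a bridge on the host side, static 10.0.0.x addresses, an iptables rule for the NVMe/TCP port, and ping checks in both directions. A condensed sketch of just those commands, copied from the log (the teardown steps earlier in common.sh, which report "Cannot find device" on a clean machine, are left out):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target-side veth ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator-side address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target portal address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge            # host-side bridge joining the three veth ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                  # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # namespace -> host

Both listeners in this test sit on 10.0.0.2 (ports 4420 and 4421), so 10.0.0.3 stays idle in this particular run.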
00:17:35.576 [2024-07-15 19:33:25.351280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.835 [2024-07-15 19:33:25.487817] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:35.835 [2024-07-15 19:33:25.559180] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.835 [2024-07-15 19:33:25.559462] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.835 [2024-07-15 19:33:25.559575] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.835 [2024-07-15 19:33:25.559659] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.835 [2024-07-15 19:33:25.559745] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.835 [2024-07-15 19:33:25.559947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.835 [2024-07-15 19:33:25.559956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89122 00:17:36.094 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:36.353 [2024-07-15 19:33:25.959858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.353 19:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:36.618 Malloc0 00:17:36.618 19:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:36.875 19:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.132 19:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.390 [2024-07-15 19:33:27.181926] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
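With the network in place, the target side of the test is configured entirely over JSON-RPC once nvmf_tgt (pid 89122 above) is running inside the namespace with -m 0x3 (cores 0 and 1, matching the two reactor notices above). A condensed sketch of the launch and the RPCs shown in the surrounding entries; the second listener on port 4421 completes in the entry that follows, and the comments are interpretive notes rather than output from the log:

    rootdir=/home/vagrant/spdk_repo/spdk        # repo path used throughout this log
    rpc=$rootdir/scripts/rpc.py                 # talks to /var/tmp/spdk.sock by default

    # Launch the NVMe-oF target inside the test namespace (the log wraps this in nvmfappstart).
    ip netns exec nvmf_tgt_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # ... wait until /var/tmp/spdk.sock is listening (waitforlisten in the log) ...

    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as logged
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # small RAM-backed namespace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same subsystem are what give the host two paths to flip between, and the -r flag on nvmf_create_subsystem (ANA reporting) is what makes the per-listener ANA state changes later in the run meaningful.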
10.0.0.2 -s 4421 00:17:37.648 [2024-07-15 19:33:27.418079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89212 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89212 /var/tmp/bdevperf.sock 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89212 ']' 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.648 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:38.212 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.212 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:38.212 19:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:38.470 19:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:38.729 Nvme0n1 00:17:38.729 19:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:38.987 Nvme0n1 00:17:39.246 19:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:39.246 19:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:41.149 19:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:41.149 19:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:41.408 19:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
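On the host side the test does not use the kernel NVMe initiator at all; it starts SPDK's bdevperf example as a second application and attaches the same subsystem twice, once per portal, so that bdev_nvme assembles a single multipath Nvme0n1 device on top of both connections. A condensed sketch of the entries around host/multipath_status.sh@44 through @78; the retry and timeout flags are copied verbatim from the log rather than interpreted:

    rootdir=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # bdevperf on core 2, waiting for RPC configuration (-z): queue depth 128, 4 KiB verify I/O, 90 s.
    $rootdir/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 90 &

    $rootdir/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1
    $rootdir/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rootdir/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Start the actual I/O phase; this is what later prints "Running I/O for 90 seconds...".
    $rootdir/examples/bdev/bdevperf/bdevperf.py -t 120 -s $sock perform_tests &

Both attach calls report Nvme0n1 in the log, confirming that the second connection was folded into the existing bdev as an extra path (-x multipath) rather than creating a new device.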
-n optimized 00:17:41.667 19:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:42.602 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:42.602 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:42.602 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:42.602 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.170 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.170 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:43.170 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.170 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:43.428 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:43.428 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:43.428 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:43.428 19:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.688 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.688 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:43.688 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:43.688 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.950 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.950 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:43.950 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:43.950 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.208 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.208 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:44.208 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
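Every port_status line above is the same probe with different arguments: dump the I/O paths that bdevperf's bdev_nvme layer currently sees, select the path whose trsvcid matches the port, and pull out one boolean field. A minimal stand-alone version of that check, built only from the RPC call and jq filter visible in the log (the function name mirrors the script; parameterizing the port and field is an editorial convenience):

    # Usage: port_status <trsvcid> <field> <expected>, e.g.  port_status 4420 current true
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                     bdev_nvme_get_io_paths |
                 jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

The [[ true == \t\r\u\e ]] lines in the log are xtrace's escaped rendering of exactly this kind of comparison.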
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.208 19:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:44.466 19:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.466 19:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:44.466 19:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:44.724 19:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:44.982 19:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:45.915 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:45.915 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:45.915 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.915 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:46.174 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:46.174 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:46.174 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.174 19:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:46.432 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.432 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:46.432 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:46.432 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.691 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.691 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:46.691 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.691 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:46.949 19:33:36 
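The state changes that drive the whole sequence are issued against the target application (the default /var/tmp/spdk.sock), not against bdevperf: one nvmf_subsystem_listener_set_ana_state call per portal. A sketch of a set_ANA_state-style helper assembled from the two calls visible above; the states used in this run are optimized, non_optimized and inaccessible:

    # Usage: set_ANA_state <state for 10.0.0.2:4420> <state for 10.0.0.2:4421>
    set_ANA_state() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Each pair of calls is followed by a one second sleep before the status is checked, presumably to give the host's bdev_nvme layer time to observe the ANA change.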
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.949 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:46.949 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.949 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:47.207 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.207 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:47.207 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:47.207 19:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.465 19:33:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.465 19:33:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:47.465 19:33:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:47.724 19:33:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:47.983 19:33:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:48.919 19:33:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:48.919 19:33:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:49.227 19:33:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.227 19:33:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:49.484 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.484 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:49.484 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:49.484 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.743 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:49.743 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:49.743 19:33:39 
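Taken together, the pattern that repeats through the rest of this log is: pick an ANA state for each listener, wait a second, then assert six booleans whose order is fixed: current for 4420, current for 4421, connected for 4420, connected for 4421, accessible for 4420, accessible for 4421. That is what the six true/false arguments to check_status encode. The non_optimized/non_optimized round being entered here, for instance, is still expected to show exactly one current path while both paths stay connected and accessible. A sketch of that driver pattern, assuming the port_status and set_ANA_state helpers sketched above (the real multipath_status.sh spells each case out rather than looping):

    check_status() {   # 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

    set_ANA_state non_optimized non_optimized
    sleep 1
    check_status true false true true true true    # expected flags for this combination (see @102)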
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:49.743 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.002 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.002 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:50.002 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.002 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:50.261 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.261 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:50.261 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:50.261 19:33:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.519 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.519 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:50.519 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.519 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:50.777 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.777 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:50.777 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:51.035 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:51.294 19:33:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:52.231 19:33:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:52.231 19:33:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:52.231 19:33:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.231 19:33:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:52.489 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.489 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:52.489 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:52.489 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.056 19:33:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:53.315 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.315 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:53.315 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.315 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:53.880 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:54.138 19:33:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:54.702 19:33:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:55.687 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:55.687 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:55.687 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.687 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:55.944 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:55.944 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:55.944 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.944 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:56.202 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:56.202 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:56.202 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.202 19:33:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:56.459 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.459 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:56.459 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.459 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:56.718 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.718 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:56.718 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:56.718 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.975 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:56.976 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:56.976 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:56.976 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.234 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:57.234 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:57.234 19:33:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:57.493 19:33:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:57.752 19:33:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:58.689 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:58.689 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:58.689 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.689 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:58.967 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:58.967 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:58.967 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.967 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:59.225 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.225 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:59.225 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.225 19:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:59.484 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.484 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:59.484 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.484 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:59.743 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.743 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:59.743 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.743 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:00.311 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.311 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:00.311 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:00.311 19:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.569 19:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.569 19:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:00.827 19:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:00.827 19:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:01.086 19:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:01.344 19:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:02.278 19:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:02.278 19:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:02.278 19:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.278 19:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:02.536 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.536 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:02.536 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:02.536 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.794 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.794 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:02.794 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.794 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:03.053 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.053 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:03.053 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.053 19:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:03.311 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.311 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:03.311 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:03.311 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.570 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.570 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:03.571 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:03.571 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.137 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.137 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:04.137 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:04.137 19:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:04.395 19:33:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:05.341 19:33:55 
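The @116 entry above switches the multipath policy of the assembled bdev away from the default single-active-path behaviour, and the optimized/optimized round that follows it is the first one in this run where both paths report current=true at the same time (check_status true true true true true true), rather than a single current path as in every earlier round. The change itself is one RPC against the bdevperf application:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

Note that it targets the namespace-level bdev Nvme0n1, not the controller name Nvme0 used when attaching.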
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:05.341 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:05.341 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.341 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:05.932 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.191 19:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:06.450 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.450 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:06.450 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.450 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:06.709 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.709 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:06.709 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.709 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:06.967 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.967 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:06.967 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:06.967 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.225 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.225 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:07.225 19:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:07.483 19:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:07.741 19:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.123 19:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:09.382 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.382 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:09.382 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:09.382 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.644 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.644 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:09.644 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.644 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:09.901 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.901 19:33:59 
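Worth noting against the first half of the run: under active_active, the @129 non_optimized/non_optimized round above still expects both paths to remain current (check_status true true true true true true), whereas the same ANA combination at @100 produced only one current path. To see the whole path set at once instead of through six separate probes, the same RPC output can be summarized with jq, for instance (an illustrative one-liner, not something multipath_status.sh itself runs):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'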
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:09.902 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.902 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:10.160 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:10.160 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:10.160 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:10.160 19:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.418 19:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:10.418 19:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:10.418 19:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:10.985 19:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:10.985 19:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:12.362 19:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:12.362 19:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:12.362 19:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.362 19:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:12.362 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.362 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:12.362 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.362 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:12.621 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:12.621 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:12.621 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.621 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:12.878 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.879 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:12.879 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.879 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:13.138 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.138 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:13.138 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:13.138 19:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.396 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.396 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:13.396 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.397 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89212 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89212 ']' 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89212 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89212 00:18:13.655 killing process with pid 89212 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89212' 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89212 00:18:13.655 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89212 00:18:13.655 Connection closed with partial response: 00:18:13.655 
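The functional part of the test ends here: killprocess 89212 checks that the pid still exists and looks at what it is about to kill before sending the signal, then the script waits for bdevperf to exit and dumps its captured output file. The "Connection closed with partial response" lines are the bdevperf.py RPC client noticing that the socket went away when the process died, which is expected at this point. A simplified rendering of that teardown, based on the autotest_common.sh entries above and the @139/@141 steps that follow:

    kill -0 89212                                        # is bdevperf still running?
    ps --no-headers -o comm= 89212                       # reported reactor_2 here, i.e. not a sudo wrapper
    kill 89212
    wait 89212                                           # reap bdevperf once it exits
    cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # the per-run I/O log reproduced below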
00:18:13.655 00:18:13.928 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89212 00:18:13.928 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:13.928 [2024-07-15 19:33:27.489246] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:18:13.928 [2024-07-15 19:33:27.489372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89212 ] 00:18:13.929 [2024-07-15 19:33:27.623744] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.929 [2024-07-15 19:33:27.697301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.929 Running I/O for 90 seconds... 00:18:13.929 [2024-07-15 19:33:43.889480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.889890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:18:13.929 [2024-07-15 19:33:43.890456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.890968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.890984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.891007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.891022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.891046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.891061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.891086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.891102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.892226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.892254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.892284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.892301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.892326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.929 [2024-07-15 19:33:43.892341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.929 [2024-07-15 19:33:43.892398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.930 [2024-07-15 19:33:43.892417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.930 [2024-07-15 19:33:43.892458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.930 [2024-07-15 19:33:43.892499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:13.930 [2024-07-15 19:33:43.892875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.892965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.892990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.893962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.893989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.894005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.894031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.894047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.894085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.894102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.894129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.894144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.894171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.894187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.930 [2024-07-15 19:33:43.894226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.930 [2024-07-15 19:33:43.894244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:18:13.931 [2024-07-15 19:33:43.894272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.931 [2024-07-15 19:33:43.894753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.931 [2024-07-15 19:33:43.894796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.894967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.894994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.931 [2024-07-15 19:33:43.895544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:13.931 [2024-07-15 19:33:43.895629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.895969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.895985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.896013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.896030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.896057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.931 [2024-07-15 19:33:43.896073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.931 [2024-07-15 19:33:43.896100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:33:43.896600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:33:43.896616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:18:13.932 [2024-07-15 19:34:00.742538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.742623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.742658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.742694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.742774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.742810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.742847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.742968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.742989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.932 [2024-07-15 19:34:00.743292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.743328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.932 [2024-07-15 19:34:00.743349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.932 [2024-07-15 19:34:00.743386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.743427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.743463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.743499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.743536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.743573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.743609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
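The nvme_qpair NOTICE pairs above (and continuing below) are bdevperf's per-I/O trace from try.txt: each print_command line (READ or WRITE with its sqid/cid/lba) is answered by a print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. the ANA "inaccessible" status returned for that path while the test was switching paths. Purely as an illustration, not part of the test, the completions can be tallied by opcode with a short awk pass over the same file (assuming one NOTICE per line, as bdevperf writes it):

    # Count ANA-inaccessible completions per opcode in the bdevperf trace.
    awk '
        /nvme_io_qpair_print_command/ { op = ($0 ~ / READ /) ? "READ" : "WRITE" }
        /ASYMMETRIC ACCESS INACCESSIBLE/ { if (op != "") count[op]++ }
        END { for (o in count) printf "%-5s %d\n", o, count[o] }
    ' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt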
00:18:13.933 [2024-07-15 19:34:00.743645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.743681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.743716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.743752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.743774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.743789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.744991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.745033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.745081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.745118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.745154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.745192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.745229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.745301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.745338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.745394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.745417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.745433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.746434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.746478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.746529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.746565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.746602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.933 [2024-07-15 19:34:00.746638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.746674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.746710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.746746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.746782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.933 [2024-07-15 19:34:00.746819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.933 [2024-07-15 19:34:00.746841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.746855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.746876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.746891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.746913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.746927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:18:13.934 [2024-07-15 19:34:00.746956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.746972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.746994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.747687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.747744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.747759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.749340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.934 [2024-07-15 19:34:00.749642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.749679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.749716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.749752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.749789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.934 [2024-07-15 19:34:00.749861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.934 [2024-07-15 19:34:00.749978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.934 [2024-07-15 19:34:00.749999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.750601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.750623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.750638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:18:13.935 [2024-07-15 19:34:00.751580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.751595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.751632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.751802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.751837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.751859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.751874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.752303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.752347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.752406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.752444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.935 [2024-07-15 19:34:00.752679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.935 [2024-07-15 19:34:00.752822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.935 [2024-07-15 19:34:00.752843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.752859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.752880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.752895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.752916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.752930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.752951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.752966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.752988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.753003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.936 [2024-07-15 19:34:00.754187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.754955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.754977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.754992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.755145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.755181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.936 [2024-07-15 19:34:00.755289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:13.936 [2024-07-15 19:34:00.755345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.755437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.755453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.936 [2024-07-15 19:34:00.757236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.936 [2024-07-15 19:34:00.757266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.757789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.757813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.757828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.758766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.758813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.758850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.758886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.758923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.758958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.758980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.758994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.759030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.759137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.937 [2024-07-15 19:34:00.759188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.937 [2024-07-15 19:34:00.759422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.759458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.759494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.759531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.937 [2024-07-15 19:34:00.759552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.937 [2024-07-15 19:34:00.759567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.759603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.759649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.759686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.759722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.759758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.759794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.759829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.759865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.759902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.759937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.759973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.759995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.760010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:18:13.938 [2024-07-15 19:34:00.763477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.763937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.763973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.763994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.938 [2024-07-15 19:34:00.764009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.764030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.938 [2024-07-15 19:34:00.764045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.938 [2024-07-15 19:34:00.764066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.764375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.764394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.765914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.765943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.765971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.765987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.939 [2024-07-15 19:34:00.766097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16744 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.766911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.766968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.766983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.767004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.767019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.767040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.767054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.767075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.939 [2024-07-15 19:34:00.767098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.767120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.767136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.767157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.939 [2024-07-15 19:34:00.767172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.939 [2024-07-15 19:34:00.767194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.767209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:18:13.940 [2024-07-15 19:34:00.767230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.767244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.767265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.767280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.767301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.767316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.767338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.767352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.767388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.767405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.767426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.767441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.767462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.767477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.769502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.769539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.769575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.769755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.769770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.771243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.771288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.771324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.771375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.940 [2024-07-15 19:34:00.771416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.771452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.771490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.940 [2024-07-15 19:34:00.771527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.771576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.940 [2024-07-15 19:34:00.771614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.940 [2024-07-15 19:34:00.771636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.771651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.771688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.771724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.771760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 
nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.771797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.771833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.771869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.771905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.771942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.771963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.771978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.772488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:18:13.941 [2024-07-15 19:34:00.772547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.772584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.772599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.774557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.941 [2024-07-15 19:34:00.774913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.774950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.941 [2024-07-15 19:34:00.774971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.941 [2024-07-15 19:34:00.774986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.775204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.775371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.775411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.775484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.775520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.942 [2024-07-15 19:34:00.775592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.775663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.775685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.775700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.942 [2024-07-15 19:34:00.777867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.942 [2024-07-15 19:34:00.777888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.942 [2024-07-15 19:34:00.777903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.777925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.777940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.777960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.777984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:18:13.943 [2024-07-15 19:34:00.778436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.778608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.778667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.778682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.943 [2024-07-15 19:34:00.780736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.780975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.780998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.943 [2024-07-15 19:34:00.781020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.943 [2024-07-15 19:34:00.781035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.781057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.781093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.781108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.944 [2024-07-15 19:34:00.783185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.783231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.783341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.783394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.783431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.783809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.783983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.783998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:18:13.944 [2024-07-15 19:34:00.784311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.944 [2024-07-15 19:34:00.784326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.784523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.944 [2024-07-15 19:34:00.784538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.944 [2024-07-15 19:34:00.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.786528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.786565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.786602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.786771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.786787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.787609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.787656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.787693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.787745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.787784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.945 [2024-07-15 19:34:00.787821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.787857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.787893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.787930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.787966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.787987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.788264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.945 [2024-07-15 19:34:00.788300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.788336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.788401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.945 [2024-07-15 19:34:00.788439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.945 [2024-07-15 19:34:00.788461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.788475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.788497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.788512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.788533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.788548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.788569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.788584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.788606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.788621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.788643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.788659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.790291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:18:13.946 [2024-07-15 19:34:00.790402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.790526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.790637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.790653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.946 [2024-07-15 19:34:00.791594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.946 [2024-07-15 19:34:00.791724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.946 [2024-07-15 19:34:00.791739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.791775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.791812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.791848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.791885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.791921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.791957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.791978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.947 [2024-07-15 19:34:00.792000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.792023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.792038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.792060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.792075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.792900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.792928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.792955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.792972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.792995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.793264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.793351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.793608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.793623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.794331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
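The READ/WRITE command prints above, and the ones that continue below, are paired with completions carrying the path-related status ASYMMETRIC ACCESS INACCESSIBLE (03/02): I/O on qid:1 is being failed back, consistent with the multipath status test holding that path's ANA state at inaccessible. To gauge the size of such a burst from a saved copy of this console output, a quick count over the captured text is enough; the log file name below is only an assumption, and the second command simply tallies the matching command prints on sqid:1 by opcode:

  # count completions reported with the ANA-inaccessible status (assumed log file name)
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' nvmf-tcp-vg-autotest.log | wc -l
  # tally the matching READ vs WRITE command prints on sqid:1
  grep -oE '(READ|WRITE) sqid:1' nvmf-tcp-vg-autotest.log | sort | uniq -c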
00:18:13.947 [2024-07-15 19:34:00.794408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.794497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.947 [2024-07-15 19:34:00.794533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.947 [2024-07-15 19:34:00.794570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.947 [2024-07-15 19:34:00.795005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.948 [2024-07-15 19:34:00.795565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.948 [2024-07-15 19:34:00.795669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.948 [2024-07-15 19:34:00.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.948 Received shutdown signal, test time was about 34.469983 seconds 00:18:13.948 00:18:13.948 Latency(us) 00:18:13.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.948 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.948 Verification LBA range: start 0x0 length 0x4000 00:18:13.948 Nvme0n1 : 34.47 8338.41 32.57 0.00 0.00 15318.91 1079.85 4026531.84 00:18:13.948 =================================================================================================================== 00:18:13.948 Total : 8338.41 32.57 0.00 0.00 15318.91 1079.85 4026531.84 00:18:13.948 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.207 rmmod nvme_tcp 00:18:14.207 rmmod nvme_fabrics 00:18:14.207 rmmod nvme_keyring 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89122 ']' 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89122 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89122 ']' 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89122 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- 
# uname 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89122 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:14.207 killing process with pid 89122 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89122' 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89122 00:18:14.207 19:34:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89122 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:14.466 00:18:14.466 real 0m39.324s 00:18:14.466 user 2m10.272s 00:18:14.466 sys 0m9.444s 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.466 19:34:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:14.466 ************************************ 00:18:14.466 END TEST nvmf_host_multipath_status 00:18:14.466 ************************************ 00:18:14.466 19:34:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:14.466 19:34:04 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:14.466 19:34:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.466 19:34:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.466 19:34:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.466 ************************************ 00:18:14.466 START TEST nvmf_discovery_remove_ifc 00:18:14.466 ************************************ 00:18:14.466 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:14.725 * Looking for test storage... 
00:18:14.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:14.725 Cannot find device "nvmf_tgt_br" 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:14.725 Cannot find device "nvmf_tgt_br2" 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:14.725 Cannot find device "nvmf_tgt_br" 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:14.725 Cannot find device "nvmf_tgt_br2" 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.725 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:14.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:18:14.984 00:18:14.984 --- 10.0.0.2 ping statistics --- 00:18:14.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.984 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:14.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:14.984 00:18:14.984 --- 10.0.0.3 ping statistics --- 00:18:14.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.984 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:14.984 00:18:14.984 --- 10.0.0.1 ping statistics --- 00:18:14.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.984 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.984 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90506 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90506 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90506 ']' 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.985 19:34:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:14.985 [2024-07-15 19:34:04.785584] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:18:14.985 [2024-07-15 19:34:04.785680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.243 [2024-07-15 19:34:04.926406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.243 [2024-07-15 19:34:04.996557] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.243 [2024-07-15 19:34:04.996614] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.243 [2024-07-15 19:34:04.996628] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.243 [2024-07-15 19:34:04.996637] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.243 [2024-07-15 19:34:04.996646] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.243 [2024-07-15 19:34:04.996681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 [2024-07-15 19:34:05.823437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.224 [2024-07-15 19:34:05.831553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:16.224 null0 00:18:16.224 [2024-07-15 19:34:05.863474] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90562 00:18:16.224 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90562 /tmp/host.sock 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90562 ']' 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.225 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.225 19:34:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.225 [2024-07-15 19:34:05.937230] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:18:16.225 [2024-07-15 19:34:05.937322] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90562 ] 00:18:16.482 [2024-07-15 19:34:06.076649] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.482 [2024-07-15 19:34:06.150153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:16.482 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.483 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.483 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.483 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:16.483 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.483 19:34:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:17.853 [2024-07-15 19:34:07.286881] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:17.853 [2024-07-15 19:34:07.286946] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:17.853 [2024-07-15 19:34:07.286980] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:17.853 [2024-07-15 19:34:07.373032] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:17.853 
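The trace that follows waits for the bdev created by the discovery attach (nvme0n1) to show up, then removes 10.0.0.2 from nvmf_tgt_if, takes the interface down, and waits for the bdev list to drain again. The get_bdev_list/wait_for_bdev helpers are only visible here through their xtrace output; a minimal sketch of that polling pattern, inferred from the trace (the rpc_cmd pass-through below is an assumption, not the test suite's actual definition):

  # sketch only: rpc_cmd is assumed to forward straight to scripts/rpc.py
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

  # list bdev names as one sorted, space-separated string
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # poll once per second until the bdev list matches the expected value
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1   # after attach: expect exactly nvme0n1
  wait_for_bdev ''        # after the target interface goes down: expect an empty list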
[2024-07-15 19:34:07.430085] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:17.853 [2024-07-15 19:34:07.430178] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:17.853 [2024-07-15 19:34:07.430220] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:17.853 [2024-07-15 19:34:07.430244] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:17.853 [2024-07-15 19:34:07.430274] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:17.853 [2024-07-15 19:34:07.435120] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb948c0 was disconnected and freed. delete nvme_qpair. 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.853 19:34:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:17.853 19:34:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:18.786 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:19.043 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.043 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:19.043 19:34:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:19.975 19:34:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:20.907 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:21.227 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.227 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:21.227 19:34:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:22.168 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:22.168 19:34:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:22.169 19:34:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:23.104 19:34:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:23.104 [2024-07-15 19:34:12.857951] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:23.104 [2024-07-15 19:34:12.858015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.104 [2024-07-15 19:34:12.858039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.104 [2024-07-15 19:34:12.858053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.104 [2024-07-15 19:34:12.858063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.104 [2024-07-15 19:34:12.858073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.104 [2024-07-15 19:34:12.858082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.104 [2024-07-15 19:34:12.858092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.104 [2024-07-15 19:34:12.858101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.104 [2024-07-15 19:34:12.858111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.104 [2024-07-15 19:34:12.858120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.104 [2024-07-15 19:34:12.858129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5daa0 is same with the state(5) to be set 00:18:23.104 [2024-07-15 19:34:12.867943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5daa0 (9): Bad file descriptor 00:18:23.104 [2024-07-15 19:34:12.877966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:24.478 [2024-07-15 19:34:13.906468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:24.478 [2024-07-15 19:34:13.906592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5daa0 with addr=10.0.0.2, port=4420 00:18:24.478 [2024-07-15 19:34:13.906627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5daa0 is same with the state(5) to be set 00:18:24.478 [2024-07-15 19:34:13.906700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5daa0 (9): Bad file descriptor 00:18:24.478 [2024-07-15 19:34:13.907600] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:24.478 [2024-07-15 19:34:13.907652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:24.478 [2024-07-15 19:34:13.907674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:24.478 [2024-07-15 19:34:13.907696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:24.478 [2024-07-15 19:34:13.907737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
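The once-per-second rhythm in the lines above is the host side of the test polling its bdev list while these reconnect attempts fail: each cycle is the same rpc_cmd bdev_get_bdevs | jq | sort | xargs pipeline followed by sleep 1, repeated until nvme0n1 finally drops out of the list. A minimal reconstruction of that polling pair, inferred from the xtrace rather than copied from discovery_remove_ifc.sh (rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py, and the exact function bodies may differ):

    # Reconstructed from the xtrace above, not the script verbatim: list bdev
    # names over the host RPC socket, then wait until the list matches.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1   # '' while waiting for nvme0n1 to vanish, nvme1n1 after recovery
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }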
00:18:24.478 [2024-07-15 19:34:13.907760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:24.478 19:34:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:25.412 [2024-07-15 19:34:14.907845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:25.412 [2024-07-15 19:34:14.907910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:25.412 [2024-07-15 19:34:14.907922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:25.412 [2024-07-15 19:34:14.907932] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:25.412 [2024-07-15 19:34:14.907956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:25.412 [2024-07-15 19:34:14.907986] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:25.412 [2024-07-15 19:34:14.908053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.412 [2024-07-15 19:34:14.908069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.412 [2024-07-15 19:34:14.908084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.412 [2024-07-15 19:34:14.908093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.412 [2024-07-15 19:34:14.908103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.412 [2024-07-15 19:34:14.908112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.412 [2024-07-15 19:34:14.908123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.412 [2024-07-15 19:34:14.908132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.412 [2024-07-15 19:34:14.908142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.412 [2024-07-15 19:34:14.908151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.412 [2024-07-15 19:34:14.908161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
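By this point the fault injection has run its course: both the data controller and the discovery controller are torn down because the test earlier removed the target's address and link inside its network namespace, and the step just below re-adds them so discovery can find the subsystem again. The two halves of that fault, exactly as issued in the trace:

    # Fault: drop the target address and take the interface down inside the target netns.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # Recover: restore the address and bring the link back up so discovery re-attaches.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up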
00:18:25.412 [2024-07-15 19:34:14.908181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb00540 (9): Bad file descriptor 00:18:25.412 [2024-07-15 19:34:14.908972] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:25.412 [2024-07-15 19:34:14.908989] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.412 19:34:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.412 19:34:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.412 19:34:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:25.412 19:34:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:26.348 19:34:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:27.283 [2024-07-15 19:34:16.918445] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:27.283 [2024-07-15 19:34:16.918494] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:27.283 [2024-07-15 19:34:16.918517] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:27.283 [2024-07-15 19:34:17.004596] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:27.283 [2024-07-15 19:34:17.060871] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:27.283 [2024-07-15 19:34:17.060953] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:27.283 [2024-07-15 19:34:17.060980] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:27.283 [2024-07-15 19:34:17.061003] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:27.284 [2024-07-15 19:34:17.061015] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:27.284 [2024-07-15 19:34:17.067138] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb72860 was disconnected and freed. delete nvme_qpair. 
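The re-attach above follows the same path as the original attach at the start of the test: the host's discovery connection to 10.0.0.2:8009 was opened once, with short reconnect and loss timeouts, so losing the interface deletes nvme0n1 and restoring it produces nvme1n1. Spelled out against scripts/rpc.py rather than the rpc_cmd wrapper used in the trace (the wrapper and the rpc.py path are the only assumptions here; the flags are taken from the earlier xtrace line), the call is roughly:

    # Approximation of the bdev_nvme_start_discovery call seen earlier in the trace.
    # Short loss/reconnect timeouts are what make the controller go away quickly
    # once the target interface disappears.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach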
00:18:27.542 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:27.542 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:27.542 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90562 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90562 ']' 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90562 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90562 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90562' 00:18:27.543 killing process with pid 90562 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90562 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90562 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:27.543 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.802 rmmod nvme_tcp 00:18:27.802 rmmod nvme_fabrics 00:18:27.802 rmmod nvme_keyring 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:27.802 19:34:17 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90506 ']' 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90506 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90506 ']' 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90506 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90506 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:27.802 killing process with pid 90506 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90506' 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90506 00:18:27.802 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90506 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:28.061 00:18:28.061 real 0m13.412s 00:18:28.061 user 0m23.791s 00:18:28.061 sys 0m1.471s 00:18:28.061 ************************************ 00:18:28.061 END TEST nvmf_discovery_remove_ifc 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.061 19:34:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:28.061 ************************************ 00:18:28.061 19:34:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:28.062 19:34:17 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:28.062 19:34:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:28.062 19:34:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.062 19:34:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.062 ************************************ 00:18:28.062 START TEST nvmf_identify_kernel_target 00:18:28.062 ************************************ 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:28.062 * Looking for test storage... 00:18:28.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:28.062 Cannot find device "nvmf_tgt_br" 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:28.062 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.321 Cannot find device "nvmf_tgt_br2" 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:28.321 Cannot find device "nvmf_tgt_br" 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:28.321 Cannot find device "nvmf_tgt_br2" 00:18:28.321 19:34:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:28.321 19:34:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:28.321 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:28.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:28.580 00:18:28.580 --- 10.0.0.2 ping statistics --- 00:18:28.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.580 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:28.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:28.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:28.580 00:18:28.580 --- 10.0.0.3 ping statistics --- 00:18:28.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.580 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:28.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:18:28.580 00:18:28.580 --- 10.0.0.1 ping statistics --- 00:18:28.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.580 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:28.580 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:28.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:28.839 Waiting for block devices as requested 00:18:28.839 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:29.097 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:29.097 No valid GPT data, bailing 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:29.097 No valid GPT data, bailing 00:18:29.097 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:29.356 No valid GPT data, bailing 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:29.356 19:34:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:29.356 No valid GPT data, bailing 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
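The subsystem directory created above, together with the namespace and port entries created just below, assembles a kernel NVMe-oF target through nvmet configfs, backed by the /dev/nvme1n1 device that survived the GPT check. Gathered in one place, and with the redirect targets filled in by assumption (xtrace records the echoed values but not the files they are written to), the sequence is approximately:

    # Attribute file names below are the usual nvmet configfs ones and are assumed,
    # since the trace only shows the echo values, not their redirects.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute name assumed
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output further below, with its two log entries (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420), is the direct result of that final port-to-subsystem symlink.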
00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:29.356 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -a 10.0.0.1 -t tcp -s 4420 00:18:29.356 00:18:29.356 Discovery Log Number of Records 2, Generation counter 2 00:18:29.356 =====Discovery Log Entry 0====== 00:18:29.356 trtype: tcp 00:18:29.356 adrfam: ipv4 00:18:29.356 subtype: current discovery subsystem 00:18:29.356 treq: not specified, sq flow control disable supported 00:18:29.356 portid: 1 00:18:29.356 trsvcid: 4420 00:18:29.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:29.356 traddr: 10.0.0.1 00:18:29.356 eflags: none 00:18:29.356 sectype: none 00:18:29.356 =====Discovery Log Entry 1====== 00:18:29.356 trtype: tcp 00:18:29.356 adrfam: ipv4 00:18:29.356 subtype: nvme subsystem 00:18:29.356 treq: not specified, sq flow control disable supported 00:18:29.356 portid: 1 00:18:29.356 trsvcid: 4420 00:18:29.356 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:29.356 traddr: 10.0.0.1 00:18:29.356 eflags: none 00:18:29.357 sectype: none 00:18:29.357 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:29.357 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:29.615 ===================================================== 00:18:29.615 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:29.615 ===================================================== 00:18:29.615 Controller Capabilities/Features 00:18:29.615 ================================ 00:18:29.615 Vendor ID: 0000 00:18:29.615 Subsystem Vendor ID: 0000 00:18:29.615 Serial Number: d2e955767a1405c649af 00:18:29.615 Model Number: Linux 00:18:29.615 Firmware Version: 6.7.0-68 00:18:29.615 Recommended Arb Burst: 0 00:18:29.615 IEEE OUI Identifier: 00 00 00 00:18:29.615 Multi-path I/O 00:18:29.615 May have multiple subsystem ports: No 00:18:29.615 May have multiple controllers: No 00:18:29.615 Associated with SR-IOV VF: No 00:18:29.615 Max Data Transfer Size: Unlimited 00:18:29.615 Max Number of Namespaces: 0 
00:18:29.615 Max Number of I/O Queues: 1024 00:18:29.615 NVMe Specification Version (VS): 1.3 00:18:29.615 NVMe Specification Version (Identify): 1.3 00:18:29.615 Maximum Queue Entries: 1024 00:18:29.615 Contiguous Queues Required: No 00:18:29.615 Arbitration Mechanisms Supported 00:18:29.615 Weighted Round Robin: Not Supported 00:18:29.615 Vendor Specific: Not Supported 00:18:29.615 Reset Timeout: 7500 ms 00:18:29.615 Doorbell Stride: 4 bytes 00:18:29.615 NVM Subsystem Reset: Not Supported 00:18:29.615 Command Sets Supported 00:18:29.615 NVM Command Set: Supported 00:18:29.615 Boot Partition: Not Supported 00:18:29.615 Memory Page Size Minimum: 4096 bytes 00:18:29.615 Memory Page Size Maximum: 4096 bytes 00:18:29.615 Persistent Memory Region: Not Supported 00:18:29.615 Optional Asynchronous Events Supported 00:18:29.615 Namespace Attribute Notices: Not Supported 00:18:29.615 Firmware Activation Notices: Not Supported 00:18:29.615 ANA Change Notices: Not Supported 00:18:29.615 PLE Aggregate Log Change Notices: Not Supported 00:18:29.615 LBA Status Info Alert Notices: Not Supported 00:18:29.615 EGE Aggregate Log Change Notices: Not Supported 00:18:29.615 Normal NVM Subsystem Shutdown event: Not Supported 00:18:29.615 Zone Descriptor Change Notices: Not Supported 00:18:29.615 Discovery Log Change Notices: Supported 00:18:29.615 Controller Attributes 00:18:29.616 128-bit Host Identifier: Not Supported 00:18:29.616 Non-Operational Permissive Mode: Not Supported 00:18:29.616 NVM Sets: Not Supported 00:18:29.616 Read Recovery Levels: Not Supported 00:18:29.616 Endurance Groups: Not Supported 00:18:29.616 Predictable Latency Mode: Not Supported 00:18:29.616 Traffic Based Keep ALive: Not Supported 00:18:29.616 Namespace Granularity: Not Supported 00:18:29.616 SQ Associations: Not Supported 00:18:29.616 UUID List: Not Supported 00:18:29.616 Multi-Domain Subsystem: Not Supported 00:18:29.616 Fixed Capacity Management: Not Supported 00:18:29.616 Variable Capacity Management: Not Supported 00:18:29.616 Delete Endurance Group: Not Supported 00:18:29.616 Delete NVM Set: Not Supported 00:18:29.616 Extended LBA Formats Supported: Not Supported 00:18:29.616 Flexible Data Placement Supported: Not Supported 00:18:29.616 00:18:29.616 Controller Memory Buffer Support 00:18:29.616 ================================ 00:18:29.616 Supported: No 00:18:29.616 00:18:29.616 Persistent Memory Region Support 00:18:29.616 ================================ 00:18:29.616 Supported: No 00:18:29.616 00:18:29.616 Admin Command Set Attributes 00:18:29.616 ============================ 00:18:29.616 Security Send/Receive: Not Supported 00:18:29.616 Format NVM: Not Supported 00:18:29.616 Firmware Activate/Download: Not Supported 00:18:29.616 Namespace Management: Not Supported 00:18:29.616 Device Self-Test: Not Supported 00:18:29.616 Directives: Not Supported 00:18:29.616 NVMe-MI: Not Supported 00:18:29.616 Virtualization Management: Not Supported 00:18:29.616 Doorbell Buffer Config: Not Supported 00:18:29.616 Get LBA Status Capability: Not Supported 00:18:29.616 Command & Feature Lockdown Capability: Not Supported 00:18:29.616 Abort Command Limit: 1 00:18:29.616 Async Event Request Limit: 1 00:18:29.616 Number of Firmware Slots: N/A 00:18:29.616 Firmware Slot 1 Read-Only: N/A 00:18:29.616 Firmware Activation Without Reset: N/A 00:18:29.616 Multiple Update Detection Support: N/A 00:18:29.616 Firmware Update Granularity: No Information Provided 00:18:29.616 Per-Namespace SMART Log: No 00:18:29.616 Asymmetric Namespace Access Log Page: 
Not Supported 00:18:29.616 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:29.616 Command Effects Log Page: Not Supported 00:18:29.616 Get Log Page Extended Data: Supported 00:18:29.616 Telemetry Log Pages: Not Supported 00:18:29.616 Persistent Event Log Pages: Not Supported 00:18:29.616 Supported Log Pages Log Page: May Support 00:18:29.616 Commands Supported & Effects Log Page: Not Supported 00:18:29.616 Feature Identifiers & Effects Log Page:May Support 00:18:29.616 NVMe-MI Commands & Effects Log Page: May Support 00:18:29.616 Data Area 4 for Telemetry Log: Not Supported 00:18:29.616 Error Log Page Entries Supported: 1 00:18:29.616 Keep Alive: Not Supported 00:18:29.616 00:18:29.616 NVM Command Set Attributes 00:18:29.616 ========================== 00:18:29.616 Submission Queue Entry Size 00:18:29.616 Max: 1 00:18:29.616 Min: 1 00:18:29.616 Completion Queue Entry Size 00:18:29.616 Max: 1 00:18:29.616 Min: 1 00:18:29.616 Number of Namespaces: 0 00:18:29.616 Compare Command: Not Supported 00:18:29.616 Write Uncorrectable Command: Not Supported 00:18:29.616 Dataset Management Command: Not Supported 00:18:29.616 Write Zeroes Command: Not Supported 00:18:29.616 Set Features Save Field: Not Supported 00:18:29.616 Reservations: Not Supported 00:18:29.616 Timestamp: Not Supported 00:18:29.616 Copy: Not Supported 00:18:29.616 Volatile Write Cache: Not Present 00:18:29.616 Atomic Write Unit (Normal): 1 00:18:29.616 Atomic Write Unit (PFail): 1 00:18:29.616 Atomic Compare & Write Unit: 1 00:18:29.616 Fused Compare & Write: Not Supported 00:18:29.616 Scatter-Gather List 00:18:29.616 SGL Command Set: Supported 00:18:29.616 SGL Keyed: Not Supported 00:18:29.616 SGL Bit Bucket Descriptor: Not Supported 00:18:29.616 SGL Metadata Pointer: Not Supported 00:18:29.616 Oversized SGL: Not Supported 00:18:29.616 SGL Metadata Address: Not Supported 00:18:29.616 SGL Offset: Supported 00:18:29.616 Transport SGL Data Block: Not Supported 00:18:29.616 Replay Protected Memory Block: Not Supported 00:18:29.616 00:18:29.616 Firmware Slot Information 00:18:29.616 ========================= 00:18:29.616 Active slot: 0 00:18:29.616 00:18:29.616 00:18:29.616 Error Log 00:18:29.616 ========= 00:18:29.616 00:18:29.616 Active Namespaces 00:18:29.616 ================= 00:18:29.616 Discovery Log Page 00:18:29.616 ================== 00:18:29.616 Generation Counter: 2 00:18:29.616 Number of Records: 2 00:18:29.616 Record Format: 0 00:18:29.616 00:18:29.616 Discovery Log Entry 0 00:18:29.616 ---------------------- 00:18:29.616 Transport Type: 3 (TCP) 00:18:29.616 Address Family: 1 (IPv4) 00:18:29.616 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:29.616 Entry Flags: 00:18:29.616 Duplicate Returned Information: 0 00:18:29.616 Explicit Persistent Connection Support for Discovery: 0 00:18:29.616 Transport Requirements: 00:18:29.616 Secure Channel: Not Specified 00:18:29.616 Port ID: 1 (0x0001) 00:18:29.616 Controller ID: 65535 (0xffff) 00:18:29.616 Admin Max SQ Size: 32 00:18:29.616 Transport Service Identifier: 4420 00:18:29.616 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:29.616 Transport Address: 10.0.0.1 00:18:29.616 Discovery Log Entry 1 00:18:29.616 ---------------------- 00:18:29.616 Transport Type: 3 (TCP) 00:18:29.616 Address Family: 1 (IPv4) 00:18:29.616 Subsystem Type: 2 (NVM Subsystem) 00:18:29.616 Entry Flags: 00:18:29.616 Duplicate Returned Information: 0 00:18:29.616 Explicit Persistent Connection Support for Discovery: 0 00:18:29.616 Transport Requirements: 00:18:29.616 
Secure Channel: Not Specified 00:18:29.616 Port ID: 1 (0x0001) 00:18:29.616 Controller ID: 65535 (0xffff) 00:18:29.616 Admin Max SQ Size: 32 00:18:29.616 Transport Service Identifier: 4420 00:18:29.616 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:29.616 Transport Address: 10.0.0.1 00:18:29.616 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:29.875 get_feature(0x01) failed 00:18:29.875 get_feature(0x02) failed 00:18:29.875 get_feature(0x04) failed 00:18:29.875 ===================================================== 00:18:29.875 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:29.875 ===================================================== 00:18:29.875 Controller Capabilities/Features 00:18:29.875 ================================ 00:18:29.875 Vendor ID: 0000 00:18:29.875 Subsystem Vendor ID: 0000 00:18:29.875 Serial Number: 86faaa47c40f749fe5d6 00:18:29.875 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:29.875 Firmware Version: 6.7.0-68 00:18:29.875 Recommended Arb Burst: 6 00:18:29.875 IEEE OUI Identifier: 00 00 00 00:18:29.875 Multi-path I/O 00:18:29.875 May have multiple subsystem ports: Yes 00:18:29.875 May have multiple controllers: Yes 00:18:29.875 Associated with SR-IOV VF: No 00:18:29.875 Max Data Transfer Size: Unlimited 00:18:29.875 Max Number of Namespaces: 1024 00:18:29.875 Max Number of I/O Queues: 128 00:18:29.875 NVMe Specification Version (VS): 1.3 00:18:29.875 NVMe Specification Version (Identify): 1.3 00:18:29.875 Maximum Queue Entries: 1024 00:18:29.875 Contiguous Queues Required: No 00:18:29.875 Arbitration Mechanisms Supported 00:18:29.875 Weighted Round Robin: Not Supported 00:18:29.875 Vendor Specific: Not Supported 00:18:29.875 Reset Timeout: 7500 ms 00:18:29.875 Doorbell Stride: 4 bytes 00:18:29.875 NVM Subsystem Reset: Not Supported 00:18:29.875 Command Sets Supported 00:18:29.875 NVM Command Set: Supported 00:18:29.875 Boot Partition: Not Supported 00:18:29.875 Memory Page Size Minimum: 4096 bytes 00:18:29.875 Memory Page Size Maximum: 4096 bytes 00:18:29.875 Persistent Memory Region: Not Supported 00:18:29.875 Optional Asynchronous Events Supported 00:18:29.875 Namespace Attribute Notices: Supported 00:18:29.875 Firmware Activation Notices: Not Supported 00:18:29.875 ANA Change Notices: Supported 00:18:29.875 PLE Aggregate Log Change Notices: Not Supported 00:18:29.875 LBA Status Info Alert Notices: Not Supported 00:18:29.875 EGE Aggregate Log Change Notices: Not Supported 00:18:29.875 Normal NVM Subsystem Shutdown event: Not Supported 00:18:29.875 Zone Descriptor Change Notices: Not Supported 00:18:29.875 Discovery Log Change Notices: Not Supported 00:18:29.875 Controller Attributes 00:18:29.875 128-bit Host Identifier: Supported 00:18:29.875 Non-Operational Permissive Mode: Not Supported 00:18:29.875 NVM Sets: Not Supported 00:18:29.875 Read Recovery Levels: Not Supported 00:18:29.875 Endurance Groups: Not Supported 00:18:29.875 Predictable Latency Mode: Not Supported 00:18:29.875 Traffic Based Keep ALive: Supported 00:18:29.875 Namespace Granularity: Not Supported 00:18:29.875 SQ Associations: Not Supported 00:18:29.875 UUID List: Not Supported 00:18:29.875 Multi-Domain Subsystem: Not Supported 00:18:29.876 Fixed Capacity Management: Not Supported 00:18:29.876 Variable Capacity Management: Not Supported 00:18:29.876 
Delete Endurance Group: Not Supported 00:18:29.876 Delete NVM Set: Not Supported 00:18:29.876 Extended LBA Formats Supported: Not Supported 00:18:29.876 Flexible Data Placement Supported: Not Supported 00:18:29.876 00:18:29.876 Controller Memory Buffer Support 00:18:29.876 ================================ 00:18:29.876 Supported: No 00:18:29.876 00:18:29.876 Persistent Memory Region Support 00:18:29.876 ================================ 00:18:29.876 Supported: No 00:18:29.876 00:18:29.876 Admin Command Set Attributes 00:18:29.876 ============================ 00:18:29.876 Security Send/Receive: Not Supported 00:18:29.876 Format NVM: Not Supported 00:18:29.876 Firmware Activate/Download: Not Supported 00:18:29.876 Namespace Management: Not Supported 00:18:29.876 Device Self-Test: Not Supported 00:18:29.876 Directives: Not Supported 00:18:29.876 NVMe-MI: Not Supported 00:18:29.876 Virtualization Management: Not Supported 00:18:29.876 Doorbell Buffer Config: Not Supported 00:18:29.876 Get LBA Status Capability: Not Supported 00:18:29.876 Command & Feature Lockdown Capability: Not Supported 00:18:29.876 Abort Command Limit: 4 00:18:29.876 Async Event Request Limit: 4 00:18:29.876 Number of Firmware Slots: N/A 00:18:29.876 Firmware Slot 1 Read-Only: N/A 00:18:29.876 Firmware Activation Without Reset: N/A 00:18:29.876 Multiple Update Detection Support: N/A 00:18:29.876 Firmware Update Granularity: No Information Provided 00:18:29.876 Per-Namespace SMART Log: Yes 00:18:29.876 Asymmetric Namespace Access Log Page: Supported 00:18:29.876 ANA Transition Time : 10 sec 00:18:29.876 00:18:29.876 Asymmetric Namespace Access Capabilities 00:18:29.876 ANA Optimized State : Supported 00:18:29.876 ANA Non-Optimized State : Supported 00:18:29.876 ANA Inaccessible State : Supported 00:18:29.876 ANA Persistent Loss State : Supported 00:18:29.876 ANA Change State : Supported 00:18:29.876 ANAGRPID is not changed : No 00:18:29.876 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:29.876 00:18:29.876 ANA Group Identifier Maximum : 128 00:18:29.876 Number of ANA Group Identifiers : 128 00:18:29.876 Max Number of Allowed Namespaces : 1024 00:18:29.876 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:29.876 Command Effects Log Page: Supported 00:18:29.876 Get Log Page Extended Data: Supported 00:18:29.876 Telemetry Log Pages: Not Supported 00:18:29.876 Persistent Event Log Pages: Not Supported 00:18:29.876 Supported Log Pages Log Page: May Support 00:18:29.876 Commands Supported & Effects Log Page: Not Supported 00:18:29.876 Feature Identifiers & Effects Log Page:May Support 00:18:29.876 NVMe-MI Commands & Effects Log Page: May Support 00:18:29.876 Data Area 4 for Telemetry Log: Not Supported 00:18:29.876 Error Log Page Entries Supported: 128 00:18:29.876 Keep Alive: Supported 00:18:29.876 Keep Alive Granularity: 1000 ms 00:18:29.876 00:18:29.876 NVM Command Set Attributes 00:18:29.876 ========================== 00:18:29.876 Submission Queue Entry Size 00:18:29.876 Max: 64 00:18:29.876 Min: 64 00:18:29.876 Completion Queue Entry Size 00:18:29.876 Max: 16 00:18:29.876 Min: 16 00:18:29.876 Number of Namespaces: 1024 00:18:29.876 Compare Command: Not Supported 00:18:29.876 Write Uncorrectable Command: Not Supported 00:18:29.876 Dataset Management Command: Supported 00:18:29.876 Write Zeroes Command: Supported 00:18:29.876 Set Features Save Field: Not Supported 00:18:29.876 Reservations: Not Supported 00:18:29.876 Timestamp: Not Supported 00:18:29.876 Copy: Not Supported 00:18:29.876 Volatile Write Cache: Present 
00:18:29.876 Atomic Write Unit (Normal): 1 00:18:29.876 Atomic Write Unit (PFail): 1 00:18:29.876 Atomic Compare & Write Unit: 1 00:18:29.876 Fused Compare & Write: Not Supported 00:18:29.876 Scatter-Gather List 00:18:29.876 SGL Command Set: Supported 00:18:29.876 SGL Keyed: Not Supported 00:18:29.876 SGL Bit Bucket Descriptor: Not Supported 00:18:29.876 SGL Metadata Pointer: Not Supported 00:18:29.876 Oversized SGL: Not Supported 00:18:29.876 SGL Metadata Address: Not Supported 00:18:29.876 SGL Offset: Supported 00:18:29.876 Transport SGL Data Block: Not Supported 00:18:29.876 Replay Protected Memory Block: Not Supported 00:18:29.876 00:18:29.876 Firmware Slot Information 00:18:29.876 ========================= 00:18:29.876 Active slot: 0 00:18:29.876 00:18:29.876 Asymmetric Namespace Access 00:18:29.876 =========================== 00:18:29.876 Change Count : 0 00:18:29.876 Number of ANA Group Descriptors : 1 00:18:29.876 ANA Group Descriptor : 0 00:18:29.876 ANA Group ID : 1 00:18:29.876 Number of NSID Values : 1 00:18:29.876 Change Count : 0 00:18:29.876 ANA State : 1 00:18:29.876 Namespace Identifier : 1 00:18:29.876 00:18:29.876 Commands Supported and Effects 00:18:29.876 ============================== 00:18:29.876 Admin Commands 00:18:29.876 -------------- 00:18:29.876 Get Log Page (02h): Supported 00:18:29.876 Identify (06h): Supported 00:18:29.876 Abort (08h): Supported 00:18:29.876 Set Features (09h): Supported 00:18:29.876 Get Features (0Ah): Supported 00:18:29.876 Asynchronous Event Request (0Ch): Supported 00:18:29.876 Keep Alive (18h): Supported 00:18:29.876 I/O Commands 00:18:29.876 ------------ 00:18:29.876 Flush (00h): Supported 00:18:29.876 Write (01h): Supported LBA-Change 00:18:29.876 Read (02h): Supported 00:18:29.876 Write Zeroes (08h): Supported LBA-Change 00:18:29.876 Dataset Management (09h): Supported 00:18:29.876 00:18:29.876 Error Log 00:18:29.876 ========= 00:18:29.876 Entry: 0 00:18:29.876 Error Count: 0x3 00:18:29.876 Submission Queue Id: 0x0 00:18:29.876 Command Id: 0x5 00:18:29.876 Phase Bit: 0 00:18:29.876 Status Code: 0x2 00:18:29.876 Status Code Type: 0x0 00:18:29.876 Do Not Retry: 1 00:18:29.876 Error Location: 0x28 00:18:29.876 LBA: 0x0 00:18:29.876 Namespace: 0x0 00:18:29.876 Vendor Log Page: 0x0 00:18:29.876 ----------- 00:18:29.876 Entry: 1 00:18:29.876 Error Count: 0x2 00:18:29.876 Submission Queue Id: 0x0 00:18:29.876 Command Id: 0x5 00:18:29.876 Phase Bit: 0 00:18:29.876 Status Code: 0x2 00:18:29.876 Status Code Type: 0x0 00:18:29.876 Do Not Retry: 1 00:18:29.876 Error Location: 0x28 00:18:29.876 LBA: 0x0 00:18:29.876 Namespace: 0x0 00:18:29.876 Vendor Log Page: 0x0 00:18:29.876 ----------- 00:18:29.876 Entry: 2 00:18:29.876 Error Count: 0x1 00:18:29.876 Submission Queue Id: 0x0 00:18:29.876 Command Id: 0x4 00:18:29.876 Phase Bit: 0 00:18:29.876 Status Code: 0x2 00:18:29.876 Status Code Type: 0x0 00:18:29.876 Do Not Retry: 1 00:18:29.876 Error Location: 0x28 00:18:29.876 LBA: 0x0 00:18:29.876 Namespace: 0x0 00:18:29.876 Vendor Log Page: 0x0 00:18:29.876 00:18:29.876 Number of Queues 00:18:29.876 ================ 00:18:29.876 Number of I/O Submission Queues: 128 00:18:29.876 Number of I/O Completion Queues: 128 00:18:29.876 00:18:29.876 ZNS Specific Controller Data 00:18:29.876 ============================ 00:18:29.876 Zone Append Size Limit: 0 00:18:29.876 00:18:29.876 00:18:29.876 Active Namespaces 00:18:29.876 ================= 00:18:29.876 get_feature(0x05) failed 00:18:29.876 Namespace ID:1 00:18:29.876 Command Set Identifier: NVM (00h) 
00:18:29.876 Deallocate: Supported 00:18:29.876 Deallocated/Unwritten Error: Not Supported 00:18:29.876 Deallocated Read Value: Unknown 00:18:29.876 Deallocate in Write Zeroes: Not Supported 00:18:29.876 Deallocated Guard Field: 0xFFFF 00:18:29.877 Flush: Supported 00:18:29.877 Reservation: Not Supported 00:18:29.877 Namespace Sharing Capabilities: Multiple Controllers 00:18:29.877 Size (in LBAs): 1310720 (5GiB) 00:18:29.877 Capacity (in LBAs): 1310720 (5GiB) 00:18:29.877 Utilization (in LBAs): 1310720 (5GiB) 00:18:29.877 UUID: 8c62882e-8679-4d5f-84ee-bd7ffbd42245 00:18:29.877 Thin Provisioning: Not Supported 00:18:29.877 Per-NS Atomic Units: Yes 00:18:29.877 Atomic Boundary Size (Normal): 0 00:18:29.877 Atomic Boundary Size (PFail): 0 00:18:29.877 Atomic Boundary Offset: 0 00:18:29.877 NGUID/EUI64 Never Reused: No 00:18:29.877 ANA group ID: 1 00:18:29.877 Namespace Write Protected: No 00:18:29.877 Number of LBA Formats: 1 00:18:29.877 Current LBA Format: LBA Format #00 00:18:29.877 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:29.877 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.877 rmmod nvme_tcp 00:18:29.877 rmmod nvme_fabrics 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:29.877 
19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:29.877 19:34:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:30.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:30.811 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.811 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.811 ************************************ 00:18:30.811 END TEST nvmf_identify_kernel_target 00:18:30.811 ************************************ 00:18:30.811 00:18:30.811 real 0m2.784s 00:18:30.811 user 0m1.009s 00:18:30.811 sys 0m1.290s 00:18:30.811 19:34:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:30.811 19:34:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.811 19:34:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:30.811 19:34:20 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:30.811 19:34:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:30.811 19:34:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.811 19:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:30.811 ************************************ 00:18:30.811 START TEST nvmf_auth_host 00:18:30.811 ************************************ 00:18:30.811 19:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:31.070 * Looking for test storage... 
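For reference, the clean_kernel_target sequence traced above undoes the earlier configfs setup in reverse order of creation. A minimal sketch (xtrace again hides the redirection, so the file receiving the 'echo 0' is an assumption):

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$sub/namespaces/1/enable"      # assumed target of the 'echo 0'
  rm -f "$port/subsystems/$nqn"            # unlink the subsystem from the port
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet              # unload once /sys/module/nvmet/holders/* is empty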
00:18:31.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.070 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.071 Cannot find device "nvmf_tgt_br" 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.071 Cannot find device "nvmf_tgt_br2" 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.071 Cannot find device "nvmf_tgt_br" 
00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.071 Cannot find device "nvmf_tgt_br2" 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.071 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:31.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:31.330 00:18:31.330 --- 10.0.0.2 ping statistics --- 00:18:31.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.330 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:31.330 19:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:31.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:31.330 00:18:31.330 --- 10.0.0.3 ping statistics --- 00:18:31.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.330 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:31.330 00:18:31.330 --- 10.0.0.1 ping statistics --- 00:18:31.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.330 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91430 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91430 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91430 ']' 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.330 19:34:21 
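nvmf_veth_init, traced above, builds the virtual test network for NET_TYPE=virt: the initiator keeps nvmf_init_if at 10.0.0.1, the target interfaces sit inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and the bridge ends are joined on nvmf_br, with iptables opened for port 4420. Condensed from the trace:

  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The sub-millisecond ping results above, together with the 10.0.0.1 ping just below, verify reachability in both directions before nvmf_tgt is started inside the namespace.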
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.330 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.898 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=48f9ad7d7874d2f54692889b94444478 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fm0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 48f9ad7d7874d2f54692889b94444478 0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 48f9ad7d7874d2f54692889b94444478 0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=48f9ad7d7874d2f54692889b94444478 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fm0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fm0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fm0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8877bd754548874c212cc9de5be3492ddea78cb3bb841248d349bc88be5d19b5 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gr5 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8877bd754548874c212cc9de5be3492ddea78cb3bb841248d349bc88be5d19b5 3 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8877bd754548874c212cc9de5be3492ddea78cb3bb841248d349bc88be5d19b5 3 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8877bd754548874c212cc9de5be3492ddea78cb3bb841248d349bc88be5d19b5 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gr5 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gr5 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Gr5 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1fcd79ef8a319c87581f2c8081f6d5b3281d9bf0b19cca1a 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.23F 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1fcd79ef8a319c87581f2c8081f6d5b3281d9bf0b19cca1a 0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1fcd79ef8a319c87581f2c8081f6d5b3281d9bf0b19cca1a 0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1fcd79ef8a319c87581f2c8081f6d5b3281d9bf0b19cca1a 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.23F 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.23F 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.23F 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3013fe676cf5a6352f0c263cb4f9fa4b3af086365642ba8b 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fXF 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3013fe676cf5a6352f0c263cb4f9fa4b3af086365642ba8b 2 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3013fe676cf5a6352f0c263cb4f9fa4b3af086365642ba8b 2 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3013fe676cf5a6352f0c263cb4f9fa4b3af086365642ba8b 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fXF 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fXF 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fXF 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99379d9bb15954b9c22e5c75540211e1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HEu 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99379d9bb15954b9c22e5c75540211e1 
1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99379d9bb15954b9c22e5c75540211e1 1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99379d9bb15954b9c22e5c75540211e1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:31.899 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HEu 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HEu 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HEu 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3cc86068e4afa62873a184d19f1e19f4 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FkE 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3cc86068e4afa62873a184d19f1e19f4 1 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3cc86068e4afa62873a184d19f1e19f4 1 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3cc86068e4afa62873a184d19f1e19f4 00:18:32.158 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FkE 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FkE 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FkE 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:32.159 19:34:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f42303aad0254d033f0f0045c1304d0c8d897529c5805a0c 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JS9 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f42303aad0254d033f0f0045c1304d0c8d897529c5805a0c 2 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f42303aad0254d033f0f0045c1304d0c8d897529c5805a0c 2 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f42303aad0254d033f0f0045c1304d0c8d897529c5805a0c 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JS9 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JS9 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JS9 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=701f290778751fa02988dda80ab50fa3 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3zj 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 701f290778751fa02988dda80ab50fa3 0 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 701f290778751fa02988dda80ab50fa3 0 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=701f290778751fa02988dda80ab50fa3 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3zj 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3zj 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3zj 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:32.159 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:32.418 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:32.418 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=659512e8c9e7e6b2d6cf6ea7baafcb1f728723981e6f36d9d1b3b9424fabe205 00:18:32.418 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TdF 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 659512e8c9e7e6b2d6cf6ea7baafcb1f728723981e6f36d9d1b3b9424fabe205 3 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 659512e8c9e7e6b2d6cf6ea7baafcb1f728723981e6f36d9d1b3b9424fabe205 3 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=659512e8c9e7e6b2d6cf6ea7baafcb1f728723981e6f36d9d1b3b9424fabe205 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:32.419 19:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TdF 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TdF 00:18:32.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TdF 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91430 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91430 ']' 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
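[Editor's note] The gen_dhchap_key calls traced above draw random bytes from /dev/urandom with xxd and wrap the resulting hex string into the DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64 blob>: before writing it to a mode-0600 temp file. A condensed, hypothetical sketch of that helper follows; it assumes the trace's unechoed "python -" step appends the little-endian CRC-32 of the secret string before base64-encoding, and it takes the digest id as a number instead of a name.

# Sketch only -- not part of the log; mirrors what gen_dhchap_key/format_dhchap_key appear to do.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# secret = ASCII hex string + CRC-32 of that string (assumed little-endian), base64-encoded
blob = key + struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(blob).decode()))
PYEOF
    chmod 0600 "$file"
    echo "$file"
}
gen_dhchap_key_sketch 2 48   # 24 random bytes formatted with digest id 2 (sha384), like the key above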
00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.419 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fm0 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Gr5 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gr5 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.23F 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fXF ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fXF 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HEu 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FkE ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FkE 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
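[Editor's note] The rpc_cmd calls above register each generated secret with the target's keyring (key3/ckey3 and key4 follow just below), and the connect_authenticate loops later in the log attach a controller with a chosen --dhchap-key/--dhchap-ctrlr-key pair. A minimal host-side sketch of the same flow, driven with scripts/rpc.py directly, assuming the default RPC socket and the temporary key files named in the trace:

# Sketch only -- not part of the log. Key names, paths and flags are the ones shown in the trace.
./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.fm0
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gr5
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.23F
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fXF
./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.HEu
./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FkE
# ...key3/ckey3 and key4 are added the same way below, then each test case does roughly:
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers           # expect a controller named nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0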
00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JS9 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3zj ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3zj 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TdF 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
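[Editor's note] The nvmet_auth_init / configure_kernel_target trace that follows stands up a Linux kernel NVMe-oF TCP target through configfs for the SPDK host to authenticate against. A condensed, hypothetical sketch of those steps: the NQNs, address, port and device path are taken from the trace, but the configfs attribute file names are an assumption, since xtrace records the echoed values and not the redirection targets.

# Sketch only -- not part of the log; condensed from the configure_kernel_target trace below.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # attribute name assumed
echo 1 > "$subsys/attr_allow_any_host"                        # attribute name assumed
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# auth.sh then restricts access to the test host NQN before exercising DH-HMAC-CHAP:
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/attr_allow_any_host"                        # attribute name assumed
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$subsys/allowed_hosts/"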
00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:32.738 19:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:33.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:33.003 Waiting for block devices as requested 00:18:33.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:33.261 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:33.827 No valid GPT data, bailing 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:33.827 No valid GPT data, bailing 00:18:33.827 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:34.084 No valid GPT data, bailing 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:34.084 No valid GPT data, bailing 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:34.084 19:34:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:34.084 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -a 10.0.0.1 -t tcp -s 4420 00:18:34.084 00:18:34.084 Discovery Log Number of Records 2, Generation counter 2 00:18:34.084 =====Discovery Log Entry 0====== 00:18:34.084 trtype: tcp 00:18:34.084 adrfam: ipv4 00:18:34.084 subtype: current discovery subsystem 00:18:34.084 treq: not specified, sq flow control disable supported 00:18:34.084 portid: 1 00:18:34.084 trsvcid: 4420 00:18:34.084 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:34.084 traddr: 10.0.0.1 00:18:34.084 eflags: none 00:18:34.084 sectype: none 00:18:34.084 =====Discovery Log Entry 1====== 00:18:34.084 trtype: tcp 00:18:34.084 adrfam: ipv4 00:18:34.084 subtype: nvme subsystem 00:18:34.085 treq: not specified, sq flow control disable supported 00:18:34.085 portid: 1 00:18:34.085 trsvcid: 4420 00:18:34.085 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:34.085 traddr: 10.0.0.1 00:18:34.085 eflags: none 00:18:34.085 sectype: none 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.085 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.343 19:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.343 nvme0n1 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.343 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.603 nvme0n1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.603 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.604 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.604 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.862 nvme0n1 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.862 19:34:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.862 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.863 nvme0n1 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.863 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:35.121 19:34:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 nvme0n1 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:35.121 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.122 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:35.380 19:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.380 nvme0n1 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.380 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.639 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.897 nvme0n1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.897 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 nvme0n1 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.156 19:34:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 nvme0n1 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.156 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.415 19:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.415 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.416 nvme0n1 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
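The iterations traced above all provision the target side the same way: nvmet_auth_set_key pushes the digest ('hmac(sha256)'), the DH group (ffdhe3072 in this block) and the DHHC-1 secret for the host into the kernel nvmet target before the initiator attaches. The trace only shows the echo of those values, not where they land; the sketch below is a minimal illustration assuming the standard nvmet configfs attributes, and the host NQN directory is illustrative rather than taken from this log.

    # Hypothetical stand-in for the nvmet_auth_set_key helper seen in the trace.
    # Assumes the usual nvmet configfs attributes (dhchap_hash, dhchap_dhgroup,
    # dhchap_key, dhchap_ctrl_key); the host directory path is illustrative only.
    nvmet_auth_set_key_sketch() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha256)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "${key}"          > "${host}/dhchap_key"      # DHHC-1:xx:...: secret
        if [[ -n "${ckey}" ]]; then
            # a controller key enables bidirectional authentication
            echo "${ckey}" > "${host}/dhchap_ctrl_key"
        fi
    }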
00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.416 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.674 nvme0n1 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.674 19:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
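On the initiator side, each connect_authenticate pass reduces to the two rpc_cmd calls that keep repeating above: bdev_nvme_set_options restricts the negotiated digest and DH group, and bdev_nvme_attach_controller connects with the in-band DH-HMAC-CHAP key. The same calls can be issued directly through SPDK's scripts/rpc.py (rpc_cmd in the trace is the test wrapper around it); the key names key0/ckey0 refer to keys registered earlier in the script, outside this excerpt.

    # Stand-alone equivalent of one connect_authenticate iteration (sha256 /
    # ffdhe4096, keyid 0), using the same address, NQNs and key names as the trace.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0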
00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.240 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.497 nvme0n1 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.497 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 nvme0n1 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:38.015 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.016 nvme0n1 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.016 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.275 19:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.275 nvme0n1 00:18:38.275 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.275 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
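After each attach, the test confirms that the controller actually came up under the expected name and then detaches it before moving to the next key. The [[ nvme0 == \n\v\m\e\0 ]] lines are simply how xtrace prints the literal pattern nvme0 on the right-hand side of the comparison. A condensed form of that check, again assuming scripts/rpc.py as the entry point:

    # Verify the authenticated controller exists, then tear it down (one iteration).
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "${name}" == "nvme0" ]] || exit 1
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0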
00:18:38.275 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.275 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.275 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.533 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.534 19:34:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.534 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.792 nvme0n1 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.792 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.793 19:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.694 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.952 nvme0n1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.952 19:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.518 nvme0n1 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:41.518 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.519 
19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.519 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.775 nvme0n1 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:41.775 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.776 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.032 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.300 nvme0n1 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.300 19:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.300 19:34:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.300 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.866 nvme0n1 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.866 19:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.432 nvme0n1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.432 19:34:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.432 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 nvme0n1 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.411 19:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 nvme0n1 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.978 
19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
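
The nvmet_auth_set_key entries traced above only show the values being echoed (the 'hmac(sha256)' digest name, the ffdhe8192 DH group, and the two DHHC-1 secrets at host/auth.sh@48-51); bash xtrace does not print redirections, so the destination is hidden. On a Linux kernel nvmet target these writes would normally land in the per-host configfs attributes. The sketch below is a hedged reconstruction under that assumption; the configfs paths do not appear anywhere in this log, while the hostnqn and secrets are taken from the keyid=3 pass above.

  # Assumed configfs entry for the host NQN that the target will authenticate.
  nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key='DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==:'   # host secret (auth.sh@45)
  ckey='DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8:'                           # controller secret (auth.sh@46)
  echo 'hmac(sha256)' > "$nvmet_host/dhchap_hash"       # digest selected for this pass
  echo ffdhe8192      > "$nvmet_host/dhchap_dhgroup"    # DH group selected for this pass
  echo "$key"         > "$nvmet_host/dhchap_key"        # enables host (unidirectional) authentication
  [[ -n $ckey ]] && echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"   # enables bidirectional authentication when set
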
00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.978 19:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 nvme0n1 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.546 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:45.805 
19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.805 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.806 19:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.372 nvme0n1 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.372 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.373 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.373 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.373 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.373 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.373 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.632 nvme0n1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
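
On the host side, rpc_cmd in this trace is the autotest wrapper around SPDK's scripts/rpc.py, so the connect_authenticate() sequence that follows (set options, attach with DH-HMAC-CHAP keys, verify, detach) could be reproduced by hand roughly as sketched below. This is not the test script itself, and it assumes the DHHC-1 secrets were registered beforehand under the names key1 and ckey1 (for example via the keyring_file_add_key RPC), a step that does not appear in this excerpt.

  # Restrict the initiator to the digest/DH group under test (sha384/ffdhe2048 in this pass).
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # Attach with in-band authentication; the key names refer to previously registered keyring entries.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # A successful handshake leaves a controller named nvme0; detach it before the next combination.
  ./scripts/rpc.py bdev_nvme_get_controllers
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
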
00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.632 nvme0n1 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.632 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.890 nvme0n1 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.890 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.891 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.150 nvme0n1 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.150 nvme0n1 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.150 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.409 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.409 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.409 19:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.409 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:47.409 19:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.409 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
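
The blocks of near-identical output above and below are produced by three nested loops in host/auth.sh (the @100-@104 markers in the trace): every digest is combined with every DH group and every key ID, the target is re-keyed, and the connection is re-authenticated. A hedged structural reconstruction, not the verbatim script, limited to the values visible in this excerpt and relying on the two helper functions defined in the traced script:

  digests=(sha256 sha384)                               # sha256 passes above, sha384 passes from here on
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)    # groups seen in this excerpt; the full run cycles through more
  for digest in "${digests[@]}"; do                     # auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do               # auth.sh@101
          for keyid in 0 1 2 3 4; do                    # auth.sh@102 iterates "${!keys[@]}"; IDs 0-4 appear in the trace
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # auth.sh@103: program the kernel target
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@104: attach, verify, detach on the SPDK host
          done
      done
  done
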
00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.410 nvme0n1 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.410 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.668 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
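On the host side, each connect_authenticate pass drives the SPDK application (the NVMe/TCP initiator here) through a four-step RPC cycle, visible in the rpc_cmd traces: constrain the allowed digest and DH group, attach a controller using the named DH-CHAP keys, confirm the controller appeared, then detach it. A condensed equivalent using SPDK's scripts/rpc.py (rpc_cmd is the autotest wrapper around it; key0/ckey0 are key names registered earlier in the test run, outside this excerpt):

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects "nvme0"
  scripts/rpc.py bdev_nvme_detach_controller nvme0
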
00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.669 nvme0n1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.669 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.927 nvme0n1 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.927 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.185 nvme0n1 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.185 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.186 nvme0n1 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.186 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.186 19:34:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.444 19:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.444 nvme0n1 00:18:48.444 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.703 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 nvme0n1 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.961 19:34:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.961 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.962 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.220 nvme0n1 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:49.220 19:34:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.220 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.221 19:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.479 nvme0n1 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:49.479 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.738 nvme0n1 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.738 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.303 nvme0n1 00:18:50.303 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.303 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.303 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.303 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.304 19:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.580 nvme0n1 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.580 19:34:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.580 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.581 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.176 nvme0n1 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.176 19:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.434 nvme0n1 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.434 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.998 nvme0n1 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:51.998 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.999 19:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.565 nvme0n1 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.565 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.822 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.822 19:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.822 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.822 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.822 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.387 nvme0n1 00:18:53.387 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.387 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.387 19:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.387 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.387 19:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.387 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.953 nvme0n1 00:18:53.953 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.953 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.953 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.953 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.953 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.953 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.211 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.211 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.211 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.211 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.212 19:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 nvme0n1 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.778 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.778 19:34:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.779 19:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.345 nvme0n1 00:18:55.345 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.345 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.345 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.345 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.345 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.345 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.603 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.604 nvme0n1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.604 19:34:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.604 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 nvme0n1 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 nvme0n1 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.121 19:34:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.121 19:34:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.121 nvme0n1 00:18:56.121 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.122 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.379 nvme0n1 00:18:56.379 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.379 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.379 19:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.379 19:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.379 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.636 nvme0n1 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.636 
19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:56.636 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.637 19:34:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.637 nvme0n1 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.637 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
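For readers following the trace: the nvmet_auth_set_key call being traced here (host/auth.sh@42-@51, its remaining echoes continue immediately below) boils down to the sketch that follows. It is reconstructed only from the expansions visible in this log; the keys/ckeys array lookups are inferred from the surrounding "${!keys[@]}" loop, and the redirection targets of the echoes (the target-side DH-HMAC-CHAP attributes) are not captured by xtrace, so they are omitted rather than guessed.

    # Sketch of nvmet_auth_set_key as traced at host/auth.sh@42-@51 (a reconstruction, not the
    # verbatim source; the echo destinations are not visible in the xtrace and are left out).
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3                  # e.g. sha512 ffdhe3072 2 in this iteration
        key=${keys[keyid]} ckey=${ckeys[keyid]}        # DHHC-1:xx:... values seen at @45/@46
        echo "hmac(${digest})"                         # @48: HMAC digest used for the exchange
        echo "${dhgroup}"                              # @49: FFDHE group under test
        echo "${key}"                                  # @50: host key for this keyid
        [[ -z ${ckey} ]] || echo "${ckey}"             # @51: controller key only when bidirectional
    }
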
00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.901 nvme0n1 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.901 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.902 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:56.902 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.902 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.176 nvme0n1 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:57.176 
19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.176 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.434 nvme0n1 00:18:57.434 19:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.434 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.693 nvme0n1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.693 19:34:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.693 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.952 nvme0n1 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.952 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.211 nvme0n1 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.211 19:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.470 nvme0n1 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.470 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.729 nvme0n1 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
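The get_main_ns_ip helper whose trace starts here (nvmf/common.sh@741, remaining steps @747-@755 follow below) can be summarised as the sketch below. It is assembled from the expansions in this log only: TEST_TRANSPORT as the lookup key, the ${!ip} indirection and the early-return error paths are inferred, since the trace only shows the happy path ("tcp" -> NVMF_INITIATOR_IP -> 10.0.0.1).

    # Sketch of get_main_ns_ip as traced at nvmf/common.sh@741-@755. TEST_TRANSPORT and the
    # return-1 error handling are assumptions; only the successful branch appears in the trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP         # @745

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @747
        ip=${ip_candidates[$TEST_TRANSPORT]}           # @748
        [[ -z ${!ip} ]] && return 1                    # @750: indirect expansion, 10.0.0.1 here

        echo "${!ip}"                                  # @755
    }
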
00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.729 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.986 nvme0n1 00:18:58.986 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.986 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.986 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.986 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.986 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.986 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
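The connect_authenticate sha512 ffdhe6144 1 call announced above is traced next; every such call in this log reduces to the RPC sequence sketched below, assembled from the host/auth.sh@55-@65 lines repeated throughout the trace. The literal NQNs, address, port and transport are copied from this run; treating them as fixed values here, and the exact variable names, are assumptions rather than the verbatim source.

    # Sketch of connect_authenticate (host/auth.sh@55-@65) as repeatedly traced in this log.
    # The attach only succeeds if the DH-HMAC-CHAP handshake completes, so finding the
    # controller afterwards is the pass condition for the iteration.
    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest=$1 dhgroup=$2 keyid=$3
        # @58: pass a controller (bidirectional) key only when one exists for this keyid
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # @60: restrict the initiator to the digest/dhgroup combination under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @61: connect to the authenticated subsystem (10.0.0.1:4420 in this run)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # @64/@65: verify the controller came up, then detach before the next iteration
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
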
00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.244 19:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.501 nvme0n1 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.501 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.066 nvme0n1 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.066 19:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.323 nvme0n1 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.323 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.324 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.581 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.838 nvme0n1 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.838 19:34:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhmOWFkN2Q3ODc0ZDJmNTQ2OTI4ODliOTQ0NDQ0Nzhi9vEP: 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg3N2JkNzU0NTQ4ODc0YzIxMmNjOWRlNWJlMzQ5MmRkZWE3OGNiM2JiODQxMjQ4ZDM0OWJjODhiZTVkMTliNQ1EAVo=: 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.838 19:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.403 nvme0n1 00:19:01.403 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.403 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.403 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.403 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.403 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.661 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.226 nvme0n1 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.226 19:34:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTkzNzlkOWJiMTU5NTRiOWMyMmU1Yzc1NTQwMjExZTFqfIVX: 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: ]] 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2NjODYwNjhlNGFmYTYyODczYTE4NGQxOWYxZTE5ZjQEi/nv: 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.226 19:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.226 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.226 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.226 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.227 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.182 nvme0n1 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQyMzAzYWFkMDI1NGQwMzNmMGYwMDQ1YzEzMDRkMGM4ZDg5NzUyOWM1ODA1YTBjPX6z3A==: 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzAxZjI5MDc3ODc1MWZhMDI5ODhkZGE4MGFiNTBmYTNLr/z8: 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:03.182 19:34:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.182 19:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.749 nvme0n1 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjU5NTEyZThjOWU3ZTZiMmQ2Y2Y2ZWE3YmFhZmNiMWY3Mjg3MjM5ODFlNmYzNmQ5ZDFiM2I5NDI0ZmFiZTIwNdbmi3A=: 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:03.749 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:03.750 19:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.316 nvme0n1 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZjZDc5ZWY4YTMxOWM4NzU4MWYyYzgwODFmNmQ1YjMyODFkOWJmMGIxOWNjYTFh4Ka5kg==: 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzAxM2ZlNjc2Y2Y1YTYzNTJmMGMyNjNjYjRmOWZhNGIzYWYwODYzNjU2NDJiYThibqU/ug==: 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.316 
19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.316 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.575 2024/07/15 19:34:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:04.575 request: 00:19:04.575 { 00:19:04.575 "method": "bdev_nvme_attach_controller", 00:19:04.575 "params": { 00:19:04.575 "name": "nvme0", 00:19:04.575 "trtype": "tcp", 00:19:04.575 "traddr": "10.0.0.1", 00:19:04.575 "adrfam": "ipv4", 00:19:04.575 "trsvcid": "4420", 00:19:04.575 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:04.575 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:04.575 "prchk_reftag": false, 00:19:04.575 "prchk_guard": false, 00:19:04.575 "hdgst": false, 00:19:04.575 "ddgst": false 00:19:04.575 } 00:19:04.575 } 00:19:04.575 Got JSON-RPC error response 00:19:04.575 GoRPCClient: error on JSON-RPC call 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.575 2024/07/15 19:34:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:04.575 request: 00:19:04.575 { 00:19:04.575 "method": "bdev_nvme_attach_controller", 00:19:04.575 "params": { 00:19:04.575 "name": 
"nvme0", 00:19:04.575 "trtype": "tcp", 00:19:04.575 "traddr": "10.0.0.1", 00:19:04.575 "adrfam": "ipv4", 00:19:04.575 "trsvcid": "4420", 00:19:04.575 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:04.575 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:04.575 "prchk_reftag": false, 00:19:04.575 "prchk_guard": false, 00:19:04.575 "hdgst": false, 00:19:04.575 "ddgst": false, 00:19:04.575 "dhchap_key": "key2" 00:19:04.575 } 00:19:04.575 } 00:19:04.575 Got JSON-RPC error response 00:19:04.575 GoRPCClient: error on JSON-RPC call 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.575 2024/07/15 19:34:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:04.575 request: 00:19:04.575 { 00:19:04.575 "method": "bdev_nvme_attach_controller", 00:19:04.575 "params": { 00:19:04.575 "name": "nvme0", 00:19:04.575 "trtype": "tcp", 00:19:04.575 "traddr": "10.0.0.1", 00:19:04.575 "adrfam": "ipv4", 00:19:04.575 "trsvcid": "4420", 00:19:04.575 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:04.575 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:04.575 "prchk_reftag": false, 00:19:04.575 "prchk_guard": false, 00:19:04.575 "hdgst": false, 00:19:04.575 "ddgst": false, 00:19:04.575 "dhchap_key": "key1", 00:19:04.575 "dhchap_ctrlr_key": "ckey2" 00:19:04.575 } 00:19:04.575 } 00:19:04.575 Got JSON-RPC error response 00:19:04.575 GoRPCClient: error on JSON-RPC call 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:04.575 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.576 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.576 rmmod nvme_tcp 00:19:04.834 rmmod nvme_fabrics 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91430 ']' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91430 00:19:04.834 19:34:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91430 ']' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91430 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91430 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:04.834 killing process with pid 91430 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91430' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91430 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91430 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:04.834 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:05.092 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:05.093 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:05.093 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:05.093 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:05.093 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:05.093 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:05.093 19:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:05.659 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:05.659 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:05.917 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:05.917 19:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fm0 /tmp/spdk.key-null.23F /tmp/spdk.key-sha256.HEu /tmp/spdk.key-sha384.JS9 /tmp/spdk.key-sha512.TdF /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:05.917 19:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:06.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:06.176 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:06.176 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:06.176 00:19:06.176 real 0m35.366s 00:19:06.176 user 0m31.429s 00:19:06.176 sys 0m3.611s 00:19:06.176 19:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.176 19:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.176 ************************************ 00:19:06.176 END TEST nvmf_auth_host 00:19:06.176 ************************************ 00:19:06.176 19:34:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.176 19:34:55 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:19:06.176 19:34:55 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:06.176 19:34:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.176 19:34:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.176 19:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.434 ************************************ 00:19:06.434 START TEST nvmf_digest 00:19:06.434 ************************************ 00:19:06.434 19:34:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:06.434 * Looking for test storage... 
00:19:06.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.434 19:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:06.435 Cannot find device "nvmf_tgt_br" 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.435 Cannot find device "nvmf_tgt_br2" 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:06.435 Cannot find device "nvmf_tgt_br" 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:06.435 Cannot find device "nvmf_tgt_br2" 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:06.435 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.693 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:06.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:06.694 00:19:06.694 --- 10.0.0.2 ping statistics --- 00:19:06.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.694 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:06.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:06.694 00:19:06.694 --- 10.0.0.3 ping statistics --- 00:19:06.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.694 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:19:06.694 00:19:06.694 --- 10.0.0.1 ping statistics --- 00:19:06.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.694 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:06.694 ************************************ 00:19:06.694 START TEST nvmf_digest_clean 00:19:06.694 ************************************ 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93014 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93014 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93014 ']' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.694 19:34:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.694 19:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:06.953 [2024-07-15 19:34:56.541611] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:19:06.953 [2024-07-15 19:34:56.541722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.953 [2024-07-15 19:34:56.681204] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.211 [2024-07-15 19:34:56.780430] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.211 [2024-07-15 19:34:56.780523] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.211 [2024-07-15 19:34:56.780547] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.211 [2024-07-15 19:34:56.780565] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.211 [2024-07-15 19:34:56.780580] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
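The nvmf_tgt starting up here runs inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init built a few lines earlier. A condensed sketch of that topology, keeping only the interface and address names shown in the log (the second target interface, cleanup and error handling are omitted):

    # Sketch of the veth/bridge layout used by the digest tests (assumes root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-side reachability check, as in the log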
00:19:07.211 [2024-07-15 19:34:56.780632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.778 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:08.036 null0 00:19:08.036 [2024-07-15 19:34:57.605455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.036 [2024-07-15 19:34:57.629499] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93064 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93064 /var/tmp/bperf.sock 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93064 ']' 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
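The target that every bperf instance below connects to was configured just above: host/digest.sh@43 pipes a JSON config into rpc_cmd, and the log only shows the resulting null0 bdev plus the TCP transport and 10.0.0.2:4420 listener notices. Expanded into explicit rpc.py calls this would look roughly as follows; the bdev size, block size and exact option set are assumptions, only the names and addresses come from the log:

    # Hypothetical expansion of the collapsed common_target_config block.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"    # default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_null_create null0 100 4096                 # 100 MiB, 4 KiB blocks (illustrative)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420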
00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.036 19:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:08.036 [2024-07-15 19:34:57.691268] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:19:08.036 [2024-07-15 19:34:57.691411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93064 ] 00:19:08.036 [2024-07-15 19:34:57.829009] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.310 [2024-07-15 19:34:57.896835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.896 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.896 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:08.896 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:08.896 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:09.153 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:09.410 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.410 19:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.667 nvme0n1 00:19:09.667 19:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:09.667 19:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:09.667 Running I/O for 2 seconds... 
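Each run_bperf iteration drives its own bdevperf process over a private RPC socket; the first iteration traced above (randread, 4 KiB I/O, queue depth 128) reduces to the following sequence, with every command taken from the log:

    # One run_bperf pass, as traced above; the harness additionally records the
    # bdevperf PID for the killprocess step that follows each run.
    BPERF_SOCK=/var/tmp/bperf.sock
    SPDK=/home/vagrant/spdk_repo/spdk

    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 \
        -z --wait-for-rpc &
    # (waitforlisten polls until $BPERF_SOCK exists before the first RPC)
    $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests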
00:19:12.197 00:19:12.197 Latency(us) 00:19:12.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.197 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:12.197 nvme0n1 : 2.00 18189.21 71.05 0.00 0.00 7029.29 3261.91 11736.90 00:19:12.197 =================================================================================================================== 00:19:12.197 Total : 18189.21 71.05 0.00 0.00 7029.29 3261.91 11736.90 00:19:12.197 0 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:12.197 | select(.opcode=="crc32c") 00:19:12.197 | "\(.module_name) \(.executed)"' 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93064 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93064 ']' 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93064 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93064 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93064' 00:19:12.197 killing process with pid 93064 00:19:12.197 Received shutdown signal, test time was about 2.000000 seconds 00:19:12.197 00:19:12.197 Latency(us) 00:19:12.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.197 =================================================================================================================== 00:19:12.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93064 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93064 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:12.197 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93153 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93153 /var/tmp/bperf.sock 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93153 ']' 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.198 19:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:12.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:12.198 Zero copy mechanism will not be used. 00:19:12.198 [2024-07-15 19:35:01.960107] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:19:12.198 [2024-07-15 19:35:01.960204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93153 ] 00:19:12.455 [2024-07-15 19:35:02.099954] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.455 [2024-07-15 19:35:02.158456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.394 19:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.394 19:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:13.394 19:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:13.394 19:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:13.394 19:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:13.650 19:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:13.650 19:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:13.907 nvme0n1 00:19:13.907 19:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:13.907 19:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:13.907 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:13.907 Zero copy mechanism will not be used. 00:19:13.907 Running I/O for 2 seconds... 
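After each of these runs the harness checks that the digest work was actually executed, and by the expected accel module (software here, because the runs are started with scan_dsa=false). The check is the accel_get_stats call plus jq filter already visible in the first results block above:

    # Post-run digest verification (filter copied from host/digest.sh@37).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # e.g. -> software 18189    (value illustrative; the test only requires
    #                            executed > 0 and the expected module name)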
00:19:16.435 00:19:16.435 Latency(us) 00:19:16.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.435 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:16.435 nvme0n1 : 2.04 7519.67 939.96 0.00 0.00 2084.10 666.53 42657.98 00:19:16.435 =================================================================================================================== 00:19:16.435 Total : 7519.67 939.96 0.00 0.00 2084.10 666.53 42657.98 00:19:16.435 0 00:19:16.435 19:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:16.435 19:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:16.435 19:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:16.435 19:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:16.435 19:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:16.435 | select(.opcode=="crc32c") 00:19:16.435 | "\(.module_name) \(.executed)"' 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93153 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93153 ']' 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93153 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93153 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:16.435 killing process with pid 93153 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93153' 00:19:16.435 Received shutdown signal, test time was about 2.000000 seconds 00:19:16.435 00:19:16.435 Latency(us) 00:19:16.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.435 =================================================================================================================== 00:19:16.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93153 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93153 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93245 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93245 /var/tmp/bperf.sock 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93245 ']' 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.435 19:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:16.694 [2024-07-15 19:35:06.264766] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:19:16.694 [2024-07-15 19:35:06.264868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93245 ] 00:19:16.694 [2024-07-15 19:35:06.397015] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.694 [2024-07-15 19:35:06.455668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.627 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.627 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:17.627 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:17.627 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:17.627 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:17.885 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:17.885 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:18.143 nvme0n1 00:19:18.143 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:18.143 19:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:18.401 Running I/O for 2 seconds... 
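Once a run's statistics have been read, the bperf process is torn down with the killprocess helper whose xtrace appears after each results block. Its rough shape, simplified from that trace (the real common/autotest_common.sh version also handles FreeBSD and sudo-wrapped processes):

    # Approximate shape of the teardown helper seen in the xtrace; illustrative only.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # process must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")          # reactor_0 / reactor_1 here
        [[ $name != sudo ]]                              # refuse to signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; non-zero exit is expected
    }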
00:19:20.352 00:19:20.352 Latency(us) 00:19:20.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.352 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:20.352 nvme0n1 : 2.01 21105.04 82.44 0.00 0.00 6058.34 2487.39 10068.71 00:19:20.352 =================================================================================================================== 00:19:20.352 Total : 21105.04 82.44 0.00 0.00 6058.34 2487.39 10068.71 00:19:20.352 0 00:19:20.352 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:20.352 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:20.352 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:20.352 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:20.352 | select(.opcode=="crc32c") 00:19:20.352 | "\(.module_name) \(.executed)"' 00:19:20.352 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93245 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93245 ']' 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93245 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93245 00:19:20.633 killing process with pid 93245 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93245' 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93245 00:19:20.633 Received shutdown signal, test time was about 2.000000 seconds 00:19:20.633 00:19:20.633 Latency(us) 00:19:20.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.633 =================================================================================================================== 00:19:20.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.633 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93245 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93334 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93334 /var/tmp/bperf.sock 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93334 ']' 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:20.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.891 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:20.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:20.891 Zero copy mechanism will not be used. 00:19:20.891 [2024-07-15 19:35:10.561760] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:19:20.891 [2024-07-15 19:35:10.561858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93334 ] 00:19:21.147 [2024-07-15 19:35:10.697945] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.147 [2024-07-15 19:35:10.773483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.147 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.147 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:21.147 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:21.147 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:21.147 19:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:21.405 19:35:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:21.405 19:35:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:21.970 nvme0n1 00:19:21.970 19:35:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:21.970 19:35:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:21.970 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:21.970 Zero copy mechanism will not be used. 00:19:21.970 Running I/O for 2 seconds... 
00:19:24.502 00:19:24.502 Latency(us) 00:19:24.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.502 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:24.502 nvme0n1 : 2.00 6412.16 801.52 0.00 0.00 2489.10 1899.05 11319.85 00:19:24.502 =================================================================================================================== 00:19:24.502 Total : 6412.16 801.52 0.00 0.00 2489.10 1899.05 11319.85 00:19:24.502 0 00:19:24.502 19:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:24.503 19:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:24.503 19:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:24.503 19:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:24.503 19:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:24.503 | select(.opcode=="crc32c") 00:19:24.503 | "\(.module_name) \(.executed)"' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93334 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93334 ']' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93334 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93334 00:19:24.503 killing process with pid 93334 00:19:24.503 Received shutdown signal, test time was about 2.000000 seconds 00:19:24.503 00:19:24.503 Latency(us) 00:19:24.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.503 =================================================================================================================== 00:19:24.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93334' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93334 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93334 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93014 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 93014 ']' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93014 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93014 00:19:24.503 killing process with pid 93014 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93014' 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93014 00:19:24.503 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93014 00:19:24.761 ************************************ 00:19:24.761 END TEST nvmf_digest_clean 00:19:24.761 ************************************ 00:19:24.761 00:19:24.761 real 0m17.969s 00:19:24.761 user 0m34.769s 00:19:24.761 sys 0m4.337s 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:24.761 ************************************ 00:19:24.761 START TEST nvmf_digest_error 00:19:24.761 ************************************ 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93435 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93435 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93435 ']' 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.761 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.762 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.762 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.762 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.020 [2024-07-15 19:35:14.575487] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:19:25.020 [2024-07-15 19:35:14.575604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.020 [2024-07-15 19:35:14.716519] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.020 [2024-07-15 19:35:14.775066] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.020 [2024-07-15 19:35:14.775126] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.020 [2024-07-15 19:35:14.775137] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.020 [2024-07-15 19:35:14.775146] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.020 [2024-07-15 19:35:14.775153] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.020 [2024-07-15 19:35:14.775178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.020 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.020 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:25.020 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.020 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.020 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.279 [2024-07-15 19:35:14.855568] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.279 19:35:14 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.279 null0 00:19:25.279 [2024-07-15 19:35:14.927345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.279 [2024-07-15 19:35:14.951475] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93461 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93461 /var/tmp/bperf.sock 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93461 ']' 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:25.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.279 19:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.279 [2024-07-15 19:35:15.011802] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
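Before this second bdevperf run, the nvmf_digest_error setup above prepared the target: nvmf_tgt was launched with --wait-for-rpc, crc32c was reassigned to the error accel module while the app was still waiting (the accel_rpc.c notice), and common_target_config then brought up the TCP transport, a null backing bdev and the 10.0.0.2:4420 listener for cnode1 (the tcp.c notices). A hand-written equivalent of that target-side sequence, with illustrative bdev sizes since the helper's exact arguments are not shown in this trace:

# rough equivalent of the target-side setup implied by the trace; sizes are placeholders
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # default socket /var/tmp/spdk.sock, i.e. the nvmf target
$RPC accel_assign_opc -o crc32c -m error                 # must run before framework_start_init
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp
$RPC bdev_null_create null0 100 4096                     # 100 MB null bdev, 4096-byte blocks (placeholder sizes)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420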
00:19:25.279 [2024-07-15 19:35:15.011900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93461 ] 00:19:25.550 [2024-07-15 19:35:15.149682] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.550 [2024-07-15 19:35:15.219741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.550 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.550 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:25.550 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:25.550 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:25.810 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:25.810 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.810 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.810 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.810 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:25.810 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:26.379 nvme0n1 00:19:26.379 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:26.379 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.379 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:26.379 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.379 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:26.379 19:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:26.379 Running I/O for 2 seconds... 
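With the target's crc32c routed through the error module, the bdevperf side of the error test (pid 93461) first sets --nvme-error-stat and --bdev-retry-count -1, clears any stale injection on the target, attaches the controller with --ddgst, and only after nvme0n1 shows up arms the corruption injection and starts perform_tests; from then on, reads whose received data digest no longer checks out are reported by nvme_tcp.c and completed as the COMMAND TRANSIENT TRANSPORT ERROR records that fill the rest of this trace. Condensed from the commands above (bperf.sock talks to bdevperf, the bare rpc.py calls go to the target):

# condensed error-injection sequence from the trace above
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

$RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$RPC accel_error_inject_error -o crc32c -t disable        # target: make sure no old injection is active
$RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256 # target: start corrupting crc32c results
$BPERF_PY -s $SOCK perform_tests                          # reads now fail the data digest check on receive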
00:19:26.379 [2024-07-15 19:35:16.085314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.085418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.085438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.379 [2024-07-15 19:35:16.100113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.100183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.100199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.379 [2024-07-15 19:35:16.115346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.115411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.115426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.379 [2024-07-15 19:35:16.128786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.128827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.128841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.379 [2024-07-15 19:35:16.142800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.142840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.142854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.379 [2024-07-15 19:35:16.160152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.160208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.160223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.379 [2024-07-15 19:35:16.173665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.379 [2024-07-15 19:35:16.173741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.379 [2024-07-15 19:35:16.173756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.636 [2024-07-15 19:35:16.188480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.636 [2024-07-15 19:35:16.188534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.636 [2024-07-15 19:35:16.188550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.636 [2024-07-15 19:35:16.201901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.636 [2024-07-15 19:35:16.201954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.636 [2024-07-15 19:35:16.201967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.636 [2024-07-15 19:35:16.214250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.636 [2024-07-15 19:35:16.214288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.636 [2024-07-15 19:35:16.214302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.636 [2024-07-15 19:35:16.230916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.636 [2024-07-15 19:35:16.230956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.636 [2024-07-15 19:35:16.230971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.245356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.245409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.245425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.260402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.260495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.260509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.273873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.273950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.273965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.288286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.288368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.288384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.302639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.302704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.302735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.314655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.314739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.314772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.329000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.329105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.329121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.343734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.343814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.343830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.358944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.359010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.359026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.371910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.371977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:26.637 [2024-07-15 19:35:16.371992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.388600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.388684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.388716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.403188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.403291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.403307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.417521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.417603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.417634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.637 [2024-07-15 19:35:16.431977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.637 [2024-07-15 19:35:16.432030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.637 [2024-07-15 19:35:16.432061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.446569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.446646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.446662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.460644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.460717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.460748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.476621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.476692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:20640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.476709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.489086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.489157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.489173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.503552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.503624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.503655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.517721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.517803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.517833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.531009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.531090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.531120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.546275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.546326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.546342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.562893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.562976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.563009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.578559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.578610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.578624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.590684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.590803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.590820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.607268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.607334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.607373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.894 [2024-07-15 19:35:16.621992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.894 [2024-07-15 19:35:16.622040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.894 [2024-07-15 19:35:16.622055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.895 [2024-07-15 19:35:16.637291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.895 [2024-07-15 19:35:16.637341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.895 [2024-07-15 19:35:16.637370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.895 [2024-07-15 19:35:16.649177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.895 [2024-07-15 19:35:16.649218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.895 [2024-07-15 19:35:16.649232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.895 [2024-07-15 19:35:16.664088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.895 [2024-07-15 19:35:16.664132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.895 [2024-07-15 19:35:16.664147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.895 [2024-07-15 19:35:16.678970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 
00:19:26.895 [2024-07-15 19:35:16.679038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.895 [2024-07-15 19:35:16.679053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.895 [2024-07-15 19:35:16.692127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:26.895 [2024-07-15 19:35:16.692170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.895 [2024-07-15 19:35:16.692185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.705102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.705178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.705193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.720202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.720290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.720304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.734892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.734946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.734974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.749397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.749451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.749480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.763978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.764019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.764033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.776172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.776212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.776226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.792749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.792789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.792804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.804735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.804789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.804819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.820411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.820453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.820469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.835172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.835230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.835246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.849540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.849620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.849650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.864322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.864416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.864436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.879814] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.879875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.879891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.893352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.893437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.893469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.908001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.908100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.908116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.923133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.923213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.923244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.937462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.937533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.937564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.153 [2024-07-15 19:35:16.950199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.153 [2024-07-15 19:35:16.950298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.153 [2024-07-15 19:35:16.950316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:16.964584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:16.964658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:16.964676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:16.978704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:16.978791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:16.978808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:16.993338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:16.993418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:16.993436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.007306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.007350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.007393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.022683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.022766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.022797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.033965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.034006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.034036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.049885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.049954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.049971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.064123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.064177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.064193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.078289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.078370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.078389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.091379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.091464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.091495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.105870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.105910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.105925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.122652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.122694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.122709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.135909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.135980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.135997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.148670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.148748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.148795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.161303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.161385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.161400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.174781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.174833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.174863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.187869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.187924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.187954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.412 [2024-07-15 19:35:17.201852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.412 [2024-07-15 19:35:17.201906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.412 [2024-07-15 19:35:17.201936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.216153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.216209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.216239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.230357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.230408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.230423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.243625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.243667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.243682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.255159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.255226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:27.670 [2024-07-15 19:35:17.255241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.272836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.272916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.272948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.284672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.284714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.284745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.299376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.299444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.299460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.314422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.314494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.314511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.329008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.329094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.329110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.342263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.342336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.342353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.355535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.355615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.355631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.368683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.368736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.368752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.384517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.384611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.384630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.398766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.398838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.398854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.413619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.413701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.670 [2024-07-15 19:35:17.413718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.670 [2024-07-15 19:35:17.427035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.670 [2024-07-15 19:35:17.427116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.671 [2024-07-15 19:35:17.427132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.671 [2024-07-15 19:35:17.441879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.671 [2024-07-15 19:35:17.441934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.671 [2024-07-15 19:35:17.441950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.671 [2024-07-15 19:35:17.454457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.671 [2024-07-15 19:35:17.454530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.671 [2024-07-15 19:35:17.454546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.671 [2024-07-15 19:35:17.467437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.671 [2024-07-15 19:35:17.467515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.671 [2024-07-15 19:35:17.467531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.482822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.482896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.482912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.497430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.497525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.497540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.509673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.509714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.509745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.523625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.523681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.523696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.538882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.538924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.538938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.554685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 
00:19:27.929 [2024-07-15 19:35:17.554728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.554742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.568972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.569013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.569027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.581160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.929 [2024-07-15 19:35:17.581200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.929 [2024-07-15 19:35:17.581214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.929 [2024-07-15 19:35:17.594494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.594536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.594551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.609075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.609130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.609160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.621800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.621855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.621885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.636888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.636945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.636976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.651476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.651547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.651577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.666493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.666567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.666583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.681092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.681150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.681165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.692391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.692489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.692522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.709291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.709369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.709387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.930 [2024-07-15 19:35:17.722561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:27.930 [2024-07-15 19:35:17.722629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.930 [2024-07-15 19:35:17.722646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.736707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.736765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.736781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.751219] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.751283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.751300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.766439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.766512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.766528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.779705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.779814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.779830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.794610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.794707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.794724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.811118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.811202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.811218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.822815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.822899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.822929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.837614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.837680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.837697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.852867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.852926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.852943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.868279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.868348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.868382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.882881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.882965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.882982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.895199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.895267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.895283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.910058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.910118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.910134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.924145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.924216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.924232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.936941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.937002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.189 [2024-07-15 19:35:17.937018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.189 [2024-07-15 19:35:17.951067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.189 [2024-07-15 19:35:17.951107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.190 [2024-07-15 19:35:17.951122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.190 [2024-07-15 19:35:17.965652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.190 [2024-07-15 19:35:17.965728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.190 [2024-07-15 19:35:17.965743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.190 [2024-07-15 19:35:17.978697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.190 [2024-07-15 19:35:17.978751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.190 [2024-07-15 19:35:17.978765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 [2024-07-15 19:35:17.992921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.448 [2024-07-15 19:35:17.992965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.448 [2024-07-15 19:35:17.992980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 [2024-07-15 19:35:18.009726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.448 [2024-07-15 19:35:18.009791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.448 [2024-07-15 19:35:18.009807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 [2024-07-15 19:35:18.021859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.448 [2024-07-15 19:35:18.021902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.448 [2024-07-15 19:35:18.021916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 [2024-07-15 19:35:18.035173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.448 [2024-07-15 19:35:18.035217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.448 [2024-07-15 19:35:18.035232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 [2024-07-15 19:35:18.047744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.448 [2024-07-15 19:35:18.047784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.448 [2024-07-15 19:35:18.047798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 [2024-07-15 19:35:18.062899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x165ae10) 00:19:28.448 [2024-07-15 19:35:18.062955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.448 [2024-07-15 19:35:18.062985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.448 00:19:28.448 Latency(us) 00:19:28.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.448 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:28.448 nvme0n1 : 2.01 17868.51 69.80 0.00 0.00 7155.51 3634.27 19541.64 00:19:28.448 =================================================================================================================== 00:19:28.448 Total : 17868.51 69.80 0.00 0.00 7155.51 3634.27 19541.64 00:19:28.448 0 00:19:28.448 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:28.448 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:28.448 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:28.448 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:28.448 | .driver_specific 00:19:28.448 | .nvme_error 00:19:28.448 | .status_code 00:19:28.448 | .command_transient_transport_error' 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93461 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93461 ']' 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93461 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93461 00:19:28.706 killing process with pid 93461 00:19:28.706 Received shutdown signal, test time was about 2.000000 seconds 00:19:28.706 00:19:28.706 Latency(us) 00:19:28.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.706 =================================================================================================================== 00:19:28.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
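For reference, the get_transient_errcount step traced just above boils down to the following standalone pipeline. This is a sketch reconstructed from the xtrace (paths, socket, and jq filter exactly as they appear in the log, the filter written on one line); it assumes the bperf RPC socket is still up when run:

# Query bdevperf's per-bdev NVMe error counters over the bperf RPC socket and
# pull out the transient transport error count for nvme0n1.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The case passes when at least one injected digest error surfaced as a
# transient transport error; the run above reported 140.
(( errcount > 0 )) && echo "transient transport errors seen: $errcount"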
00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93461' 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93461 00:19:28.706 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93461 00:19:28.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93538 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93538 /var/tmp/bperf.sock 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93538 ']' 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.965 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:28.965 Zero copy mechanism will not be used. 00:19:28.965 [2024-07-15 19:35:18.592477] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:19:28.965 [2024-07-15 19:35:18.592572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93538 ] 00:19:28.965 [2024-07-15 19:35:18.725724] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.223 [2024-07-15 19:35:18.786192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.223 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.223 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:29.223 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:29.223 19:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:29.482 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:29.482 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.482 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.482 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.482 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:29.482 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:29.740 nvme0n1 00:19:29.740 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:29.740 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.740 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.740 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.740 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:29.740 19:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:29.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:29.999 Zero copy mechanism will not be used. 00:19:29.999 Running I/O for 2 seconds... 
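Condensed from the xtrace above, the second error case (randread, 128 KiB I/O, queue depth 16) is brought up roughly as follows. This is a sketch only: the commands are copied from the trace, and the assumption that the bare rpc_cmd helper targets the main application's default RPC socket (rather than the bperf socket) is noted inline:

# Start bdevperf as the TCP initiator: core mask 0x2, randread, 128 KiB I/O,
# 2 s run, queue depth 16, -z to wait for RPC before starting the workload.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-bdev NVMe error statistics and retry failed I/O indefinitely so the
# injected errors show up as transient errors rather than hard failures.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# crc32c error injection stays disabled while the controller attaches (rpc_cmd
# is the test framework helper; assumed to use the default RPC socket here).
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach the subsystem over TCP with data digest enabled (--ddgst), then start
# corrupting every 32nd crc32c operation and run the 2-second workload.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests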
00:19:29.999 [2024-07-15 19:35:19.589391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.589474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.589491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.594053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.594124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.594156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.598699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.598748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.598762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.603448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.603493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.603507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.606391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.606440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.606454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.612074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.612158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.612174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.616825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.616884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.616915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.619839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.619893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.619923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.624947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.624988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.625002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.629933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.629999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.630014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.633672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.633742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.633757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.637330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.637406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.637421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.641956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.642026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.642041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.646362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.646426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.646441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.651139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.651203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.651218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.655562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.655622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.655638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.659444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.659499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.659514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.663778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.663832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.663847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.668419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.668477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.668492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.672387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.672444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.999 [2024-07-15 19:35:19.672486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.999 [2024-07-15 19:35:19.676901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:29.999 [2024-07-15 19:35:19.676974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.676989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.681709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.681785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.681800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.684701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.684749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.684764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.688857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.688915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.688929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.693122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.693180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.693194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.697972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.698013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.698027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.701479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.701532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.701562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.705578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.705633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 
[2024-07-15 19:35:19.705663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.709500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.709541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.709554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.713992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.714064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.714078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.718047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.718102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.718131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.722397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.722446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.722460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.726982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.727040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.727054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.731152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.731202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.731216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.735697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.735738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.735752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.739738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.739794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.739807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.743285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.743325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.747917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.747972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.748002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.751988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.752031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.752045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.756283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.756341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.756373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.760548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.760632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.760647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.765122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.765180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.765195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.769250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.769290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.769304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.773705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.773746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.773759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.777389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.777429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.777442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.782560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.782601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.782616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.785970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.786009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.786023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.790267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.790308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.790321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.000 [2024-07-15 19:35:19.793335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.000 [2024-07-15 19:35:19.793386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.000 [2024-07-15 19:35:19.793400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.001 [2024-07-15 19:35:19.797829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.001 [2024-07-15 19:35:19.797870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.001 [2024-07-15 19:35:19.797884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.802768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.802808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.802822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.806416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.806458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.806472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.810968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.811011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.811025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.815809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.815850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.815864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.821093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.821135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.821149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.824646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 
19:35:19.824686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.824700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.829338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.829391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.829405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.833771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.833811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.833825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.837280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.837320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.837333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.841485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.841538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.845529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.845568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.845582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.850378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.850417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.854553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.854628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.854642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.859068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.859124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.859138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.863148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.863219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.863234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.867687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.867728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.867741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.870898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.870967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.870996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.875815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.875867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.875896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.881190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.881233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.881247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.885527] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.885597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.885609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.889066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.889149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.889163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.893247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.261 [2024-07-15 19:35:19.893301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.261 [2024-07-15 19:35:19.897511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.261 [2024-07-15 19:35:19.897551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.897564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.901984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.902038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.902069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.905777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.905826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.905841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.910601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.910646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.910660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:30.262 [2024-07-15 19:35:19.915053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.915109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.915136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.918844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.918897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.918926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.923658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.923713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.923726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.928654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.928707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.928738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.931936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.931986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.932015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.936274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.936329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.936359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.941685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.941740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.941770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.946648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.946702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.946731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.950306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.950346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.950374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.954919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.954988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.955001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.960039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.960094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.960124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.965490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.965596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.965610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.968996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.969067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.969081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.973625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.973679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.973709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.978420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.978462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.978476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.982031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.982099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.982113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.986711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.986759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.986778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.992014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.992070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.992085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.994850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.994894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.994908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:19.999657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:19.999708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:19.999723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:20.004625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:20.004691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:20.004712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:20.009095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:20.009147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:20.009161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:20.013251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:20.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:20.013308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:20.018674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:20.018720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.262 [2024-07-15 19:35:20.018734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.262 [2024-07-15 19:35:20.024060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.262 [2024-07-15 19:35:20.024251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.024394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.027762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.027949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.027980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.033608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.033840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.033929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.038523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.038713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 
[2024-07-15 19:35:20.038799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.042848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.043035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.043127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.047818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.048007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.048105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.051799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.051940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.052024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.056689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.056833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.056918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.263 [2024-07-15 19:35:20.060879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.263 [2024-07-15 19:35:20.061051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.263 [2024-07-15 19:35:20.061167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.065675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.065850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.065931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.069802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.069941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.070023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.074806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.074934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.075016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.078417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.078560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.078651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.083001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.083056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.083072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.087585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.087635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.087649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.091474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.091524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.091539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.095821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.095887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.095901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.099652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.099705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.099719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.105239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.105300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.105314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.110867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.110945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.110959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.114473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.114535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.114549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.119043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.119103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.119118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.123659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.123716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.123732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.128418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.128477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.128491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.132627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.132695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.132709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.137065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.137110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.137124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.141486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.141523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.141537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.145393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.145428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.145441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.150289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.150327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.150340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.154684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.154734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.154747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.159723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.159779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.159807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.163743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 
[2024-07-15 19:35:20.163791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.163805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.167240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.167287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.167301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.171944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.171986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.538 [2024-07-15 19:35:20.171999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.538 [2024-07-15 19:35:20.176638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.538 [2024-07-15 19:35:20.176688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.176701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.180358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.180433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.180446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.184554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.184603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.184616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.188949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.188997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.189010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.192121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.192170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.192182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.196969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.197039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.197069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.202237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.202281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.202295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.206703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.206751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.206763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.209745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.209795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.209807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.214244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.214280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.214293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.218115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.218164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.218176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.221918] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.221970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.221983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.225895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.225933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.225947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.231341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.231387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.231400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.236311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.236347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.236375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.240192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.240228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.240240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.245503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.245539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.245552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.249492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.249542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.249555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:30.539 [2024-07-15 19:35:20.253387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.253438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.253452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.258253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.258292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.258306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.263431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.263465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.263478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.267789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.267824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.267837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.272400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.272457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.272470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.277262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.277299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.277312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.282471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.282508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.282521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.285946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.285981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.285993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.290581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.290631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.290660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.295754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.295791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.295804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.300432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.300468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.300481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.304465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.304502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.304515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.308065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.308101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.308114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.539 [2024-07-15 19:35:20.312418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.539 [2024-07-15 19:35:20.312463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.539 [2024-07-15 19:35:20.312477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.540 [2024-07-15 19:35:20.316964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.540 [2024-07-15 19:35:20.317013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.540 [2024-07-15 19:35:20.317026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.540 [2024-07-15 19:35:20.321329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.540 [2024-07-15 19:35:20.321407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.540 [2024-07-15 19:35:20.321421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.540 [2024-07-15 19:35:20.325575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.540 [2024-07-15 19:35:20.325611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.540 [2024-07-15 19:35:20.325625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.540 [2024-07-15 19:35:20.329510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.540 [2024-07-15 19:35:20.329546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.540 [2024-07-15 19:35:20.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.540 [2024-07-15 19:35:20.333592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.540 [2024-07-15 19:35:20.333627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.540 [2024-07-15 19:35:20.333641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.338169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.338226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.338255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.342155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.342205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.342243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.346535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.346571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.346584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.350660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.350695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.350725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.355093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.355160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.355173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.359632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.359683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.359695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.364187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.364239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.364252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.806 [2024-07-15 19:35:20.368591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.806 [2024-07-15 19:35:20.368628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.806 [2024-07-15 19:35:20.368641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.373016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.373067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 
[2024-07-15 19:35:20.373081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.376103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.376153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.376165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.380492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.380526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.380539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.384924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.384973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.384985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.388389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.388454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.388468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.392797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.392846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.392859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.396649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.396699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.396711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.401209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.401261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.401275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.405844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.405893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.405905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.410526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.410592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.410605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.413266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.413312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.413324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.418592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.418643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.418656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.422542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.422608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.422622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.425810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.425844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.425858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.429873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.429924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.429937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.434285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.434322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.438888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.438956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.438968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.442621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.442670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.442682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.447172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.447209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.447222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.451415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.451506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.451519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.455315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.455370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.455385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.459723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.459772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.459785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.464723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.464774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.464787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.469909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.469958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.469971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.473468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.473517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.473530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.477637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.477682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.477696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.482493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.482560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.807 [2024-07-15 19:35:20.485739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.807 [2024-07-15 19:35:20.485794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.807 [2024-07-15 19:35:20.485807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.490316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 
[2024-07-15 19:35:20.490370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.490385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.494939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.494975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.494988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.498520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.498558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.498572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.502783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.502832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.502845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.507741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.507794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.507808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.511430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.511481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.511495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.515584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.515648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.515661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.520800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.520850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.520863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.525435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.525484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.525497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.528173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.528206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.528218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.533420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.533469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.533482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.537026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.537062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.537075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.540329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.540392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.540406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.544840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.544896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.544908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.550063] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.550136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.550150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.554504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.554564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.554578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.558455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.558513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.558528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.562974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.563061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.563074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.567328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.567426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.567440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.572117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.572198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.572212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.808 [2024-07-15 19:35:20.576640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.808 [2024-07-15 19:35:20.576682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.808 [2024-07-15 19:35:20.576696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
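The repeated "data digest error" messages in this stretch of the log come from the NVMe/TCP data digest (DDGST) check, which is a CRC32C computed over the PDU data; when the computed value does not match the digest received on the wire, the read is completed with a generic-status Transient Transport Error (status code type 00h, status code 22h), which spdk_nvme_print_completion renders as "(00/22)" with dnr:0 so the command may be retried. Below is a minimal sketch of that comparison, assuming a plain bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78); the function names are illustrative only and are not SPDK APIs.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected form; illustrative, not the
 * accelerated implementation used by the transport. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical digest check mirroring what the log shows: on mismatch the
 * transport reports a data digest error and the request is completed with
 * sct 0x0 / sc 0x22 (the "(00/22)" seen above). */
static int
check_data_digest(const uint8_t *pdu_data, size_t len, uint32_t received_ddgst)
{
    if (crc32c(pdu_data, len) != received_ddgst) {
        fprintf(stderr, "data digest error\n");
        return -1; /* caller completes the command as TRANSIENT TRANSPORT ERROR */
    }
    return 0;
}

int
main(void)
{
    const uint8_t payload[512] = { 0 };    /* dummy payload for illustration */
    uint32_t good = crc32c(payload, sizeof(payload));

    check_data_digest(payload, sizeof(payload), good);       /* passes */
    check_data_digest(payload, sizeof(payload), good ^ 1u);  /* reports data digest error */
    return 0;
}

The nvme_tcp_accel_seq_recv_compute_crc32_done callback named in the messages indicates the CRC32C is actually computed through SPDK's accel sequence path on receive; the sketch above only illustrates the comparison being made, not that machinery.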
00:19:30.809 [2024-07-15 19:35:20.580350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.580432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.580446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.809 [2024-07-15 19:35:20.584857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.584932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.584946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.809 [2024-07-15 19:35:20.589451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.589484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.589497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.809 [2024-07-15 19:35:20.594065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.594104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.594117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.809 [2024-07-15 19:35:20.597757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.597798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.597812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.809 [2024-07-15 19:35:20.602396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.602439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.602452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.809 [2024-07-15 19:35:20.607934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:30.809 [2024-07-15 19:35:20.607998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.809 [2024-07-15 19:35:20.608013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.070 [2024-07-15 19:35:20.612772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.612840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.612854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.617464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.617517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.617531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.620930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.620972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.620985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.625483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.625537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.625551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.630977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.631047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.631060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.635466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.635514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.635528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.639314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.639374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.639389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.643808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.643886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.643900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.648851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.648905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.648919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.653146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.653195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.653210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.657672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.657737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.657753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.662110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.662175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.662204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.666667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.666718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.666731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.670288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.670323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.670336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.674560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.674639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.674666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.678244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.678281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.678294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.683055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.683105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.683117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.687740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.687776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.687789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.691605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.691641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.691654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.695610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.695661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.695674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.699465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.699513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 
[2024-07-15 19:35:20.699526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.703809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.703858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.703870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.708080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.708126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.708138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.711287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.711335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.711347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.715430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.715478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.715490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.719574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.719609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.719621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.723561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.723595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.723606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.727671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.727725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.727737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.732051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.732108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.071 [2024-07-15 19:35:20.732121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.071 [2024-07-15 19:35:20.736816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.071 [2024-07-15 19:35:20.736865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.736877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.740355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.740412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.740425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.744459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.744491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.744519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.747848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.747896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.747908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.751894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.751941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.751953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.757455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.757503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.757531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.760935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.760982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.760994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.764971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.765019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.765031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.769457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.769504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.769515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.772899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.772948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.772959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.776910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.776965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.776977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.781224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.781283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.781296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.785276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.785325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.785336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.789771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.789819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.789830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.794045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.794094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.794107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.797338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.797395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.797407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.801937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.801985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.801997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.805878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.805927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.805939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.810171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.810243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.810257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.814442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 
[2024-07-15 19:35:20.814482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.814495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.819418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.819493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.819507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.823144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.823178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.823191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.827867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.827903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.827916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.831905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.831940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.831953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.836622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.836699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.836712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.843598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.843666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.843690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.848126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.848171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.848189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.852705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.072 [2024-07-15 19:35:20.852784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.072 [2024-07-15 19:35:20.852799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.072 [2024-07-15 19:35:20.856739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.073 [2024-07-15 19:35:20.856806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.073 [2024-07-15 19:35:20.856820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.073 [2024-07-15 19:35:20.860929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.073 [2024-07-15 19:35:20.860967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.073 [2024-07-15 19:35:20.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.073 [2024-07-15 19:35:20.865093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.073 [2024-07-15 19:35:20.865160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.073 [2024-07-15 19:35:20.865173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.073 [2024-07-15 19:35:20.869704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.073 [2024-07-15 19:35:20.869754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.073 [2024-07-15 19:35:20.869782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.874627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.874676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.874688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.878441] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.878476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.878489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.882790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.882826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.882840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.886903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.886940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.886952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.890955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.890993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.891006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.895005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.895041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.895054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.899314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.899377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.902555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.902592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.902620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:31.334 [2024-07-15 19:35:20.906199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.906247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.906261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.910841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.910877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.910890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.915509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.915545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.915557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.918881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.918917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.918930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.923272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.923325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.923337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.928286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.928321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.928334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.931865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.931914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.931926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.936017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.936050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.936063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.939670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.939703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.939715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.944038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.944088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.944100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.948479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.948527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.948539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.951785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.951833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.951844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.955509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.955558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.955569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.960243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.960291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.960304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.964688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.964736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.964748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.968350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.968410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.968424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.971934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.971982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.971994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.976436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.976467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.976479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.981291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.981327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.981340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.984736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.984782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.984794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.334 [2024-07-15 19:35:20.989087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.334 [2024-07-15 19:35:20.989136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.334 [2024-07-15 19:35:20.989149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:20.993069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:20.993116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:20.993128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:20.996470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:20.996517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:20.996530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.000814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.000862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.000874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.004529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.004579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.004605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.008285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.008334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.008346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.012436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.012481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.012494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.016639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.016688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 
[2024-07-15 19:35:21.016715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.020449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.020496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.020508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.024806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.024871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.024884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.029258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.029308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.029320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.032612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.032660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.032672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.037054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.037105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.037134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.040902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.040953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.040966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.045891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.045931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.045945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.050325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.050387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.050402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.055334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.055389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.055404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.058791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.058828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.058841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.063662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.063714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.063727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.067220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.067272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.067285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.071564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.071615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.071627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.076422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.076458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.076471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.079612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.079649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.079662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.084128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.084185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.084215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.088102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.088156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.088169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.091661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.091712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.091724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.096238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.096274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.096287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.101206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.101255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.101267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.104903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.104951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.104963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.109106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.335 [2024-07-15 19:35:21.109155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.335 [2024-07-15 19:35:21.109166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.335 [2024-07-15 19:35:21.112659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.336 [2024-07-15 19:35:21.112707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.336 [2024-07-15 19:35:21.112720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.336 [2024-07-15 19:35:21.117172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.336 [2024-07-15 19:35:21.117222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.336 [2024-07-15 19:35:21.117233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.336 [2024-07-15 19:35:21.121582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.336 [2024-07-15 19:35:21.121633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.336 [2024-07-15 19:35:21.121645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.336 [2024-07-15 19:35:21.124795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.336 [2024-07-15 19:35:21.124845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.336 [2024-07-15 19:35:21.124857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.336 [2024-07-15 19:35:21.129243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.336 [2024-07-15 19:35:21.129281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.336 [2024-07-15 19:35:21.129294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.336 [2024-07-15 19:35:21.133733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.336 
[2024-07-15 19:35:21.133789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.336 [2024-07-15 19:35:21.133803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.137717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.137769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.137782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.142183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.142233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.142247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.146099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.146152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.146164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.150214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.150267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.150280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.154174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.154248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.154262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.158268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.158306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.158319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.162680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.162734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.162746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.166719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.166786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.166798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.171067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.171117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.175205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.175256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.175269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.179448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.179496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.179509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.183548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.183602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.183616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.187585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.187649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.187663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.191454] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.191513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.191527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.195185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.596 [2024-07-15 19:35:21.195243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.596 [2024-07-15 19:35:21.195256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.596 [2024-07-15 19:35:21.199900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.616 [2024-07-15 19:35:21.199946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.616 [2024-07-15 19:35:21.199960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.616 [2024-07-15 19:35:21.204629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.616 [2024-07-15 19:35:21.204680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.616 [2024-07-15 19:35:21.204694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.616 [2024-07-15 19:35:21.209398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.616 [2024-07-15 19:35:21.209444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.616 [2024-07-15 19:35:21.209458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.616 [2024-07-15 19:35:21.212660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.616 [2024-07-15 19:35:21.212703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.616 [2024-07-15 19:35:21.212716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.616 [2024-07-15 19:35:21.217012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.616 [2024-07-15 19:35:21.217061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.616 [2024-07-15 19:35:21.217075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
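The block of repeated entries above records the same failure pattern over and over: nvme_tcp_accel_seq_recv_compute_crc32_done() flags a data digest error on tqpair 0xc380e0, and the affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the data digest is a CRC-32C computed over the payload of a data PDU, so a mismatch between the recomputed value and the digest received on the wire is what produces each of these lines. As a rough illustration only (not the SPDK code path, and using a hypothetical verify_data_digest() helper), such a check can be sketched in C as:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Software CRC-32C (Castagnoli): reflected bitwise form,
     * init 0xFFFFFFFF, reversed polynomial 0x82F63B78, final XOR 0xFFFFFFFF. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return ~crc;
    }

    /* Hypothetical helper: recompute the digest over the received payload and
     * compare it with the digest field carried at the end of the data PDU. */
    static bool verify_data_digest(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
    {
        uint32_t calc = crc32c(payload, len);

        if (calc != recv_ddgst) {
            fprintf(stderr, "data digest error: calculated 0x%08x, received 0x%08x\n",
                    calc, recv_ddgst);
            return false;  /* caller fails the command instead of trusting the payload */
        }
        return true;
    }

In practice the CRC would typically be computed by hardware or an acceleration framework rather than this bit-at-a-time loop; the sketch only shows the compare-and-fail step that each "data digest error" line corresponds to.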
00:19:31.616 [2024-07-15 19:35:21.221243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.616 [2024-07-15 19:35:21.221291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.616 [2024-07-15 19:35:21.221304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.224861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.224907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.224922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.229308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.229371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.229386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.233584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.233630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.233644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.237150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.237192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.237206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.242070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.242119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.242132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.246355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.246409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.246423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.250497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.250534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.250548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.255660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.255710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.255722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.259776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.259825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.259837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.263108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.263157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.263169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.268219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.268257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.268270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.273319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.273376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.273391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.276236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.276272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.276285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.281129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.281190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.281220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.285521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.285560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.285573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.289195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.289231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.289244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.293075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.293111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.293125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.297175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.297240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.297253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.301607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.301643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.301656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.305022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.305058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.305070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.308784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.308827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.308840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.313459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.313505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.313519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.317720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.317763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.317777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.321657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.321695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.321709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.326180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.326231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.617 [2024-07-15 19:35:21.326245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.617 [2024-07-15 19:35:21.329803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.617 [2024-07-15 19:35:21.329841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.329854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.333454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.333489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 
[2024-07-15 19:35:21.333502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.338498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.338535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.338548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.341751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.341801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.341814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.346081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.346116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.346129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.351333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.351380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.351395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.354851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.354885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.354898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.359127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.359164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.359176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.362968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.363003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.363016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.367304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.367355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.367367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.371042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.371092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.371105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.375448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.375484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.375497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.380722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.380772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.380801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.383974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.384022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.384035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.387995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.388045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.388057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.392055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.392091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.392103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.618 [2024-07-15 19:35:21.396118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.618 [2024-07-15 19:35:21.396154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.618 [2024-07-15 19:35:21.396166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.399685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.399735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.399748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.404265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.404301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.404314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.409188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.409240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.409254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.412695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.412743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.412755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.416785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.416835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.416849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.421394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.421453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.421465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.425996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.426047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.426076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.430353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.430398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.430412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.433780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.433831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.433844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.438186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.438259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.438278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.441858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.878 [2024-07-15 19:35:21.441908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.878 [2024-07-15 19:35:21.441937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.878 [2024-07-15 19:35:21.445886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.445937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.445949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.450733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 
[2024-07-15 19:35:21.450784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.450797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.454218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.454252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.454265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.458569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.458605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.458617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.462775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.462833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.462846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.466346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.466392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.466405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.470678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.470727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.470739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.475165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.475202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.475216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.478778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.478827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.478839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.482971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.483021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.483034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.487202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.487251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.487263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.490807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.490858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.490872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.495114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.495163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.495176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.498463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.498498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.498511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.502810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.502858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.502870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.507446] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.507493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.507504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.512164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.512215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.512227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.515497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.515545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.519587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.519635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.519647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.524078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.524128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.524141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.527541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.527589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.527601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.879 [2024-07-15 19:35:21.531940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.531990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.879 [2024-07-15 19:35:21.532003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:31.879 [2024-07-15 19:35:21.535618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.879 [2024-07-15 19:35:21.535667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.535679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.539306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.539356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.539367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.543736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.543801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.543815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.547797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.547851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.547863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.551885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.551936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.551949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.555977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.556027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.556039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.560573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.560638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.560650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.564283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.564333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.564346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.568349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.568396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.568409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.572739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.572805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.572817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.576524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.576560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.576573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.580818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.580874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.580888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.880 [2024-07-15 19:35:21.584888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc380e0) 00:19:31.880 [2024-07-15 19:35:21.584946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.880 [2024-07-15 19:35:21.584960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.880 00:19:31.880 Latency(us) 00:19:31.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.880 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:31.880 nvme0n1 : 2.00 7265.31 908.16 0.00 0.00 2197.67 614.40 7447.27 00:19:31.880 =================================================================================================================== 00:19:31.880 
Total : 7265.31 908.16 0.00 0.00 2197.67 614.40 7447.27 00:19:31.880 0 00:19:31.880 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:31.880 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:31.880 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:31.880 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:31.880 | .driver_specific 00:19:31.880 | .nvme_error 00:19:31.880 | .status_code 00:19:31.880 | .command_transient_transport_error' 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 469 > 0 )) 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93538 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93538 ']' 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93538 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93538 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:32.139 killing process with pid 93538 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93538' 00:19:32.139 Received shutdown signal, test time was about 2.000000 seconds 00:19:32.139 00:19:32.139 Latency(us) 00:19:32.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.139 =================================================================================================================== 00:19:32.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93538 00:19:32.139 19:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93538 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93610 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93610 /var/tmp/bperf.sock 00:19:32.398 19:35:22 
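[editor's note] The get_transient_errcount step traced just above reduces to one JSON-RPC query plus a jq filter. The sketch below is not the host/digest.sh source, only a minimal reconstruction from the commands visible in the trace; it assumes a bdevperf instance is still serving JSON-RPC on /var/tmp/bperf.sock with the controller attached as bdev nvme0n1 and --nvme-error-stat enabled (so per-status-code error counters exist), and the count variable name is ours.

#!/usr/bin/env bash
# Minimal sketch of the get_transient_errcount check seen in the trace above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# bdev_get_iostat returns per-bdev statistics; with --nvme-error-stat the
# NVMe bdev driver adds a driver_specific.nvme_error section keyed by
# status code, which the jq filter from the trace drills into.
count=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

# The test passes only if at least one injected digest error surfaced as a
# COMMAND TRANSIENT TRANSPORT ERROR completion (the run above counted 469).
(( count > 0 )) && echo "observed $count transient transport errors"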
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93610 ']' 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.398 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:32.398 [2024-07-15 19:35:22.131779] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:19:32.398 [2024-07-15 19:35:22.131864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93610 ] 00:19:32.657 [2024-07-15 19:35:22.265568] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.657 [2024-07-15 19:35:22.325102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.657 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.657 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:32.657 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:32.657 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:33.223 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:33.223 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.223 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:33.223 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.223 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:33.223 19:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:33.564 nvme0n1 00:19:33.564 19:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:33.564 19:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.564 19:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:33.564 19:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.564 
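[editor's note] Condensed, the randwrite phase being set up in the trace above comes down to the sequence sketched below; it is not the verbatim host/digest.sh helpers. The socket-polling loop (and its use of rpc_get_methods as a liveness probe) is a stand-in for the autotest waitforlisten helper, and the rpc.py calls issued without -s are assumed to reach the nvmf target's default RPC socket, which is what the log's rpc_cmd wrapper hides. With --ddgst enabled on the host side and CRC32C corruption injected in the target's accel layer, the completions in the 2-second run that follows fail their data digest check and are reported as COMMAND TRANSIENT TRANSPORT ERROR, exactly as in the log lines below.

#!/usr/bin/env bash
# Sketch of the randwrite error-injection phase traced above. Paths, sockets
# and flags are taken from the log; comments mark the assumptions.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf on core mask 0x2 (core 1): 4 KiB randwrite, queue depth 128,
# 2-second run; -z keeps it idle so the bdev can be attached over RPC before
# perform_tests starts the workload.
"$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Stand-in for waitforlisten: poll until the RPC socket answers.
until "$spdk/scripts/rpc.py" -s "$bperf_sock" rpc_get_methods &>/dev/null; do
    sleep 0.1
done

# Host side: keep per-status-code NVMe error counters and set the bdev retry
# count as in the trace.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Target side (default RPC socket, assumed): no corruption while attaching...
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# ...attach the subsystem with TCP data digest enabled on the host,
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then re-enable CRC32C corruption in the target's accel layer
# (-o crc32c -t corrupt -i 256, taken verbatim from the trace).
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the run (the "Running I/O for 2 seconds..." message below).
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests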
19:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:33.564 19:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:33.564 Running I/O for 2 seconds... 00:19:33.564 [2024-07-15 19:35:23.220449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ee5c8 00:19:33.564 [2024-07-15 19:35:23.221378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.221416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:33.564 [2024-07-15 19:35:23.231519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fac10 00:19:33.564 [2024-07-15 19:35:23.232293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.232335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:33.564 [2024-07-15 19:35:23.245726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1430 00:19:33.564 [2024-07-15 19:35:23.247173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.247212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:33.564 [2024-07-15 19:35:23.257899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f4f40 00:19:33.564 [2024-07-15 19:35:23.259325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.259371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:33.564 [2024-07-15 19:35:23.268724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1868 00:19:33.564 [2024-07-15 19:35:23.269930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.269966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:33.564 [2024-07-15 19:35:23.281575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f7970 00:19:33.564 [2024-07-15 19:35:23.282981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.283018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:33.564 [2024-07-15 19:35:23.292895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ff3c8 00:19:33.564 
[2024-07-15 19:35:23.294090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.564 [2024-07-15 19:35:23.294126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:33.824 [2024-07-15 19:35:23.304972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e4578 00:19:33.824 [2024-07-15 19:35:23.306076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.824 [2024-07-15 19:35:23.306110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:33.824 [2024-07-15 19:35:23.317031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eea00 00:19:33.824 [2024-07-15 19:35:23.317675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.824 [2024-07-15 19:35:23.317716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:33.824 [2024-07-15 19:35:23.332015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e6300 00:19:33.824 [2024-07-15 19:35:23.333962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.824 [2024-07-15 19:35:23.334004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:33.824 [2024-07-15 19:35:23.340839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e49b0 00:19:33.824 [2024-07-15 19:35:23.341849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.341887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.355442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f4298 00:19:33.825 [2024-07-15 19:35:23.357078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.357118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.366636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fb8b8 00:19:33.825 [2024-07-15 19:35:23.367992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.368033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.378393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with 
pdu=0x2000190f0ff8 00:19:33.825 [2024-07-15 19:35:23.379733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.379770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.389535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e73e0 00:19:33.825 [2024-07-15 19:35:23.390649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.390685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.401261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eb760 00:19:33.825 [2024-07-15 19:35:23.402338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.402388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.415729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f81e0 00:19:33.825 [2024-07-15 19:35:23.417481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.417519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.427844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f2948 00:19:33.825 [2024-07-15 19:35:23.429580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.429616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.439099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190de038 00:19:33.825 [2024-07-15 19:35:23.440693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.440730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.447790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e0ea0 00:19:33.825 [2024-07-15 19:35:23.448609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.448645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.460417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f2b90) with pdu=0x2000190e99d8 00:19:33.825 [2024-07-15 19:35:23.461324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.461374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.472534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed0b0 00:19:33.825 [2024-07-15 19:35:23.473450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.473486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.483694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fb480 00:19:33.825 [2024-07-15 19:35:23.484477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.484512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.498273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e9e10 00:19:33.825 [2024-07-15 19:35:23.499697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.499735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.507871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e84c0 00:19:33.825 [2024-07-15 19:35:23.508637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.508673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.522056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1710 00:19:33.825 [2024-07-15 19:35:23.523485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.523524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.533176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f81e0 00:19:33.825 [2024-07-15 19:35:23.534435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.534472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.544636] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e4578 00:19:33.825 [2024-07-15 19:35:23.545756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.545794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.556439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fc128 00:19:33.825 [2024-07-15 19:35:23.557332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.557381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.569988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e88f8 00:19:33.825 [2024-07-15 19:35:23.571539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.571574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.582174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e38d0 00:19:33.825 [2024-07-15 19:35:23.583811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.583845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.593614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fdeb0 00:19:33.825 [2024-07-15 19:35:23.595059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.595093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.605012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f20d8 00:19:33.825 [2024-07-15 19:35:23.606301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.606339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:33.825 [2024-07-15 19:35:23.616478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1868 00:19:33.825 [2024-07-15 19:35:23.617622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.825 [2024-07-15 19:35:23.617659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.628042] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f31b8 00:19:34.085 [2024-07-15 19:35:23.629051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.629085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.641975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1710 00:19:34.085 [2024-07-15 19:35:23.643445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.643482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.653153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f6cc8 00:19:34.085 [2024-07-15 19:35:23.654294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.654332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.664850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5ec8 00:19:34.085 [2024-07-15 19:35:23.665970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.666007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.676968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ea248 00:19:34.085 [2024-07-15 19:35:23.678096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.678135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.688436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1710 00:19:34.085 [2024-07-15 19:35:23.689405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.689444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.699801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fc998 00:19:34.085 [2024-07-15 19:35:23.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.700666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:34.085 
[2024-07-15 19:35:23.713828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e27f0 00:19:34.085 [2024-07-15 19:35:23.715327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.715376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.725858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5658 00:19:34.085 [2024-07-15 19:35:23.726848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.726889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.737272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f35f0 00:19:34.085 [2024-07-15 19:35:23.738152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.738191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.748024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fa3a0 00:19:34.085 [2024-07-15 19:35:23.749037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.749074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.760216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f6020 00:19:34.085 [2024-07-15 19:35:23.761209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.761245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.771640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f20d8 00:19:34.085 [2024-07-15 19:35:23.772492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.772527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.783308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fbcf0 00:19:34.085 [2024-07-15 19:35:23.784152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.784188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.797871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f20d8 00:19:34.085 [2024-07-15 19:35:23.799415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.799455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.809986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8e88 00:19:34.085 [2024-07-15 19:35:23.811529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.811565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.821712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f2510 00:19:34.085 [2024-07-15 19:35:23.822744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.822780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.833577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ebfd0 00:19:34.085 [2024-07-15 19:35:23.834948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.834988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:34.085 [2024-07-15 19:35:23.844840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed0b0 00:19:34.085 [2024-07-15 19:35:23.846171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.085 [2024-07-15 19:35:23.846213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:34.086 [2024-07-15 19:35:23.856811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e9168 00:19:34.086 [2024-07-15 19:35:23.857667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.086 [2024-07-15 19:35:23.857705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:34.086 [2024-07-15 19:35:23.868703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eff18 00:19:34.086 [2024-07-15 19:35:23.869902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.086 [2024-07-15 19:35:23.869938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 
cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.086 [2024-07-15 19:35:23.879867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f7da8 00:19:34.086 [2024-07-15 19:35:23.880917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.086 [2024-07-15 19:35:23.880959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.892605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f3e60 00:19:34.450 [2024-07-15 19:35:23.893300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.450 [2024-07-15 19:35:23.893382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.903937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5658 00:19:34.450 [2024-07-15 19:35:23.904868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.450 [2024-07-15 19:35:23.904917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.918944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1868 00:19:34.450 [2024-07-15 19:35:23.920631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.450 [2024-07-15 19:35:23.920671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.930399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ea680 00:19:34.450 [2024-07-15 19:35:23.931766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.450 [2024-07-15 19:35:23.931810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.942001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eff18 00:19:34.450 [2024-07-15 19:35:23.943160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.450 [2024-07-15 19:35:23.943244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.956680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f57b0 00:19:34.450 [2024-07-15 19:35:23.958634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.450 [2024-07-15 19:35:23.958670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:34.450 [2024-07-15 19:35:23.965354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f0ff8 00:19:34.451 [2024-07-15 19:35:23.966109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:23.966157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:23.978920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1b48 00:19:34.451 [2024-07-15 19:35:23.979895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:23.979931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:23.991303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1430 00:19:34.451 [2024-07-15 19:35:23.992558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:23.992597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.002787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f5be8 00:19:34.451 [2024-07-15 19:35:24.003878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.003929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.014414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5658 00:19:34.451 [2024-07-15 19:35:24.015344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.015391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.027877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed4e8 00:19:34.451 [2024-07-15 19:35:24.029377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.029455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.038941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1710 00:19:34.451 [2024-07-15 19:35:24.040181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.050471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f2948 00:19:34.451 [2024-07-15 19:35:24.051625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.051660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.062539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190dece0 00:19:34.451 [2024-07-15 19:35:24.063195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.063234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.076633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ebb98 00:19:34.451 [2024-07-15 19:35:24.078600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.078649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.085133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fa7d8 00:19:34.451 [2024-07-15 19:35:24.085966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.085997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.098659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ef270 00:19:34.451 [2024-07-15 19:35:24.099669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.099706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.110232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eaef0 00:19:34.451 [2024-07-15 19:35:24.111044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.111078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.121787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ec408 00:19:34.451 [2024-07-15 19:35:24.122531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.122575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.136344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ef6a8 00:19:34.451 [2024-07-15 19:35:24.138157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.138198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:34.451 [2024-07-15 19:35:24.145119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fb048 00:19:34.451 [2024-07-15 19:35:24.145963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.451 [2024-07-15 19:35:24.146003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.159592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e6300 00:19:34.710 [2024-07-15 19:35:24.161075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.161116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.170581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f0788 00:19:34.710 [2024-07-15 19:35:24.171753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.171792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.182525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e0630 00:19:34.710 [2024-07-15 19:35:24.183555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.183594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.196708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e8088 00:19:34.710 [2024-07-15 19:35:24.198529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.198565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.205208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed0b0 00:19:34.710 [2024-07-15 19:35:24.206066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 
19:35:24.206104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.217651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f4298 00:19:34.710 [2024-07-15 19:35:24.218679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.218718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.230089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f9b30 00:19:34.710 [2024-07-15 19:35:24.231257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.231292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.242075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190dece0 00:19:34.710 [2024-07-15 19:35:24.243187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.243228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.255935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e7c50 00:19:34.710 [2024-07-15 19:35:24.257480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.257522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.267047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190df118 00:19:34.710 [2024-07-15 19:35:24.268469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.268506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.278756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e3498 00:19:34.710 [2024-07-15 19:35:24.279959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.279994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.290073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f46d0 00:19:34.710 [2024-07-15 19:35:24.291118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:34.710 [2024-07-15 19:35:24.291155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.301461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190de8a8 00:19:34.710 [2024-07-15 19:35:24.302387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.302427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.312624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fbcf0 00:19:34.710 [2024-07-15 19:35:24.313373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.313418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.326681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5a90 00:19:34.710 [2024-07-15 19:35:24.328053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.328095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.337959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e49b0 00:19:34.710 [2024-07-15 19:35:24.339123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.339164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.349654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f6020 00:19:34.710 [2024-07-15 19:35:24.350743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.350781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.364001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190feb58 00:19:34.710 [2024-07-15 19:35:24.365763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.365813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.372636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8618 00:19:34.710 [2024-07-15 19:35:24.373484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3736 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.373523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.386907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eb328 00:19:34.710 [2024-07-15 19:35:24.388348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.388396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.399177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e3498 00:19:34.710 [2024-07-15 19:35:24.400148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.400189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.412728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f2d80 00:19:34.710 [2024-07-15 19:35:24.414578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.414625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.424250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190feb58 00:19:34.710 [2024-07-15 19:35:24.425902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.425946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.433046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fa3a0 00:19:34.710 [2024-07-15 19:35:24.433836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.433873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.447420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190de038 00:19:34.710 [2024-07-15 19:35:24.448874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.448911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.459405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190de470 00:19:34.710 [2024-07-15 19:35:24.460382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:17101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.460420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.471284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190feb58 00:19:34.710 [2024-07-15 19:35:24.472611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.472646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:34.710 [2024-07-15 19:35:24.482266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fe720 00:19:34.710 [2024-07-15 19:35:24.483447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.710 [2024-07-15 19:35:24.483482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:34.711 [2024-07-15 19:35:24.494995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fef90 00:19:34.711 [2024-07-15 19:35:24.496277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.711 [2024-07-15 19:35:24.496313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:34.711 [2024-07-15 19:35:24.506438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190eea00 00:19:34.711 [2024-07-15 19:35:24.507580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.711 [2024-07-15 19:35:24.507616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.519644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e73e0 00:19:35.027 [2024-07-15 19:35:24.521270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.521309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.529533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f20d8 00:19:35.027 [2024-07-15 19:35:24.530229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.530282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.541763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e8d30 00:19:35.027 [2024-07-15 19:35:24.542947] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.542992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.556214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f81e0 00:19:35.027 [2024-07-15 19:35:24.558074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.558116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.568764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed920 00:19:35.027 [2024-07-15 19:35:24.570784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.570827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.577320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ddc00 00:19:35.027 [2024-07-15 19:35:24.578356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.578402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.591615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f7100 00:19:35.027 [2024-07-15 19:35:24.593296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.593332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.600123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ec840 00:19:35.027 [2024-07-15 19:35:24.600842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.600878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.614649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8a50 00:19:35.027 [2024-07-15 19:35:24.616072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.616118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.626038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e4578 00:19:35.027 [2024-07-15 19:35:24.627213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.627258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.637821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e6b70 00:19:35.027 [2024-07-15 19:35:24.638968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.639012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.650026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fda78 00:19:35.027 [2024-07-15 19:35:24.650712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.650769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.662706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fd640 00:19:35.027 [2024-07-15 19:35:24.663527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.663574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.674397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1f80 00:19:35.027 [2024-07-15 19:35:24.675108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.675143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.689455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8618 00:19:35.027 [2024-07-15 19:35:24.691484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.691536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.698338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fc998 00:19:35.027 [2024-07-15 19:35:24.699347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.699399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.713143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190fc128 00:19:35.027 [2024-07-15 
19:35:24.714842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.714889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.725673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190de470 00:19:35.027 [2024-07-15 19:35:24.727473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.727512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.734223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f6890 00:19:35.027 [2024-07-15 19:35:24.735070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.735106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.746728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f31b8 00:19:35.027 [2024-07-15 19:35:24.747703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.747739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.758851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e8088 00:19:35.027 [2024-07-15 19:35:24.759856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.759913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:35.027 [2024-07-15 19:35:24.773719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e4140 00:19:35.027 [2024-07-15 19:35:24.775605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.027 [2024-07-15 19:35:24.775653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.782417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1868 00:19:35.286 [2024-07-15 19:35:24.783269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.783306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.794503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e9168 
00:19:35.286 [2024-07-15 19:35:24.795348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.795395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.808569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8e88 00:19:35.286 [2024-07-15 19:35:24.810086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.810137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.819901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e95a0 00:19:35.286 [2024-07-15 19:35:24.821255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.821299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.831711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e7c50 00:19:35.286 [2024-07-15 19:35:24.832786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.832823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.846093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5220 00:19:35.286 [2024-07-15 19:35:24.848011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.848052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.854701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed4e8 00:19:35.286 [2024-07-15 19:35:24.855588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.855623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.867181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ea248 00:19:35.286 [2024-07-15 19:35:24.868240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.868276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.881532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) 
with pdu=0x2000190f31b8 00:19:35.286 [2024-07-15 19:35:24.883259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.883296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.890034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e3d08 00:19:35.286 [2024-07-15 19:35:24.890799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.890834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.904491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ddc00 00:19:35.286 [2024-07-15 19:35:24.905934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.905979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.916660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190edd58 00:19:35.286 [2024-07-15 19:35:24.918088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.918129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.928073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8e88 00:19:35.286 [2024-07-15 19:35:24.929388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.929427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.939477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e49b0 00:19:35.286 [2024-07-15 19:35:24.940638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.940677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.951177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e5a90 00:19:35.286 [2024-07-15 19:35:24.952306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.952345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.963273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f2b90) with pdu=0x2000190feb58 00:19:35.286 [2024-07-15 19:35:24.963927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.963960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.978062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f46d0 00:19:35.286 [2024-07-15 19:35:24.980026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.980064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:24.986580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e3060 00:19:35.286 [2024-07-15 19:35:24.987570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:24.987606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.000935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f7da8 00:19:35.286 [2024-07-15 19:35:25.002639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.002681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.013371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f0788 00:19:35.286 [2024-07-15 19:35:25.015201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.015238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.021867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ea680 00:19:35.286 [2024-07-15 19:35:25.022733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.022768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.036199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e1f80 00:19:35.286 [2024-07-15 19:35:25.037580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.037615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.047521] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e4578 00:19:35.286 [2024-07-15 19:35:25.048718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.048759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.058602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ef6a8 00:19:35.286 [2024-07-15 19:35:25.059698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.059738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.070246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f7538 00:19:35.286 [2024-07-15 19:35:25.071318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.071354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:35.286 [2024-07-15 19:35:25.084726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ee5c8 00:19:35.286 [2024-07-15 19:35:25.086524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.286 [2024-07-15 19:35:25.086563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.093333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f1430 00:19:35.584 [2024-07-15 19:35:25.094112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.094146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.107671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e6b70 00:19:35.584 [2024-07-15 19:35:25.109143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.109179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.119931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190ed920 00:19:35.584 [2024-07-15 19:35:25.121440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.121490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.131050] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e6738 00:19:35.584 [2024-07-15 19:35:25.132325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.132389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.142973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f4298 00:19:35.584 [2024-07-15 19:35:25.144154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.144196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.155182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f81e0 00:19:35.584 [2024-07-15 19:35:25.156351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.156405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.166721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f96f8 00:19:35.584 [2024-07-15 19:35:25.167734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.167772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.178118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190e84c0 00:19:35.584 [2024-07-15 19:35:25.178993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.179037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.192574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190f8618 00:19:35.584 [2024-07-15 19:35:25.193613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.193657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:35.584 [2024-07-15 19:35:25.204260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2b90) with pdu=0x2000190de470 00:19:35.584 [2024-07-15 19:35:25.205164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.584 [2024-07-15 19:35:25.205208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:35.584 
00:19:35.584 Latency(us)
00:19:35.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.584 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:35.584 nvme0n1 : 2.00 21193.13 82.79 0.00 0.00 6029.58 2368.23 16324.42
00:19:35.584 ===================================================================================================================
00:19:35.584 Total : 21193.13 82.79 0.00 0.00 6029.58 2368.23 16324.42
00:19:35.584 0
00:19:35.584 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:35.584 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:35.584 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:35.584 | .driver_specific
00:19:35.584 | .nvme_error
00:19:35.584 | .status_code
00:19:35.584 | .command_transient_transport_error'
00:19:35.584 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93610
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93610 ']'
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93610
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93610
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:35.857 killing process with pid 93610
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93610'
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93610
00:19:35.857 Received shutdown signal, test time was about 2.000000 seconds
00:19:35.857
00:19:35.857 Latency(us)
00:19:35.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.857 ===================================================================================================================
00:19:35.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:35.857 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93610
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93687
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93687 /var/tmp/bperf.sock
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93687 ']'
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:36.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:36.177 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:36.177 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:36.177 Zero copy mechanism will not be used.
00:19:36.177 [2024-07-15 19:35:25.736189] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization...
00:19:36.177 [2024-07-15 19:35:25.736307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93687 ]
00:19:36.177 [2024-07-15 19:35:25.867124] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:36.177 [2024-07-15 19:35:25.927713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:36.436 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:36.436 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:19:36.436 19:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:36.436 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:36.436 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:36.436 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.436 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:36.694 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.694 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:36.694 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
-b nvme0 00:19:36.952 nvme0n1 00:19:36.952 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:36.952 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.952 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:36.952 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.952 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:36.952 19:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:36.952 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:36.952 Zero copy mechanism will not be used. 00:19:36.952 Running I/O for 2 seconds... 00:19:36.952 [2024-07-15 19:35:26.683904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.684254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.684287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.689262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.689592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.689633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.694639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.694956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.694987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.700037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.700369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.700414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.705440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.705781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.705808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
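For anyone reading this trace later: the digest.sh steps above boil down to a short RPC sequence against the freshly started bdevperf instance. The sketch below is an illustrative reconstruction, not part of the harness; it simply replays the commands visible in the trace (the rpc.py and bdevperf.py paths, the bperf socket, the target address and the NQN are copied from the log, nothing is invented) and reads back the same counter the jq filter extracts. As in the trace, the crc32c error injection appears to go through rpc_cmd (the default application socket, so no -s flag here) while the bdev and controller RPCs go through the bdevperf socket.

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                              # path taken from the trace
BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"  # likewise
BPERF_SOCK = "/var/tmp/bperf.sock"

def bperf_rpc(*args: str) -> str:
    # RPCs aimed at the bdevperf app, mirroring digest.sh's bperf_rpc helper.
    return subprocess.run([RPC, "-s", BPERF_SOCK, *args],
                          check=True, capture_output=True, text=True).stdout

def rpc_cmd(*args: str) -> str:
    # RPCs aimed at the default application socket, mirroring rpc_cmd in the trace.
    return subprocess.run([RPC, *args],
                          check=True, capture_output=True, text=True).stdout

# Same order as the trace: error stats on, injection off while attaching with
# the data digest enabled, then corrupt 32 crc32c operations once nvme0 exists.
bperf_rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
bperf_rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
          "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")

# Run the queued I/O, then read the counter the jq filter pulls out of bdev_get_iostat.
subprocess.run([BDEVPERF_PY, "-s", BPERF_SOCK, "perform_tests"], check=True)
stats = json.loads(bperf_rpc("bdev_get_iostat", "-b", "nvme0n1"))
errcount = stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
    "command_transient_transport_error"]
print("transient transport errors:", errcount)   # digest.sh asserts this count is > 0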
00:19:36.952 [2024-07-15 19:35:26.710860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.711162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.711209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.716155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.716477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.716514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.721554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.721895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.721932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.727018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.727362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.727409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.732435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.732732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.732761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.737713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.738039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.743039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.743401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.743451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.748649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.748978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.749015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.952 [2024-07-15 19:35:26.754124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:36.952 [2024-07-15 19:35:26.754490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.952 [2024-07-15 19:35:26.754529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.759546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.759898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.759925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.764979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.765297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.765331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.770232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.770558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.770607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.775607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.775955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.775995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.781019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.781367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.781421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.786461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.786776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.786813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.791837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.792165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.792205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.797138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.797508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.797537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.802672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.803015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.803056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.808240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.808594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.808632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.813756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.814107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.814151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.819345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.819691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.819725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.824800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.825130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.825168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.830155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.830492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.830529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.835542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.835879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.835912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.840743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.841078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.841117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.845870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.846203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.846247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.851339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.851702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.851728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.856579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.856927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 
[2024-07-15 19:35:26.856961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.862018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.862398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.862436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.867480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.867795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.867835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.872720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.873040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.873074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.878121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.878468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.878502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.883439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.883749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.883793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.888659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.888979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.889018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.893933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.894262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.894296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.899303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.899653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.899693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.905078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.211 [2024-07-15 19:35:26.905531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.211 [2024-07-15 19:35:26.905566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.211 [2024-07-15 19:35:26.911146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.911524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.911555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.916675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.916976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.917015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.922155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.922495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.922524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.927654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.927998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.928034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.933383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.933692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.933729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.938779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.939076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.939100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.944015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.944327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.944384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.949253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.949562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.949597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.954624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.954970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.955009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.960388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.960702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.960740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.965716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.966014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.966052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.971101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.971429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.976477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.976772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.976819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.981951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.982255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.982294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.987219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.987531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.987568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.992644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.992956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.992993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:26.998448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:26.998753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:26.998789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:27.004030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 [2024-07-15 19:35:27.004401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:27.004455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.212 [2024-07-15 19:35:27.009588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.212 
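Each repeated triplet in this stream is one injected failure seen from both ends of the connection: data_crc32_calc_done in the TCP transport computes a data digest that does not match the one carried with the WRITE data (here provoked by the crc32c error injection configured earlier in the trace), and the host then prints the command plus its completion with the TRANSIENT TRANSPORT ERROR (00/22) status that the test counts. The NVMe/TCP data digest (DDGST) is a CRC32C over the PDU data. As a purely illustrative reference, and not the accelerated routine SPDK actually uses, a bit-by-bit CRC32C looks like this:

def crc32c(data: bytes) -> int:
    # Reference CRC32C (Castagnoli): reflected, init and final xor 0xFFFFFFFF;
    # 0x82F63B78 is the bit-reversed form of the polynomial 0x1EDC6F41.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc >> 1) ^ 0x82F63B78) if (crc & 1) else (crc >> 1)
    return crc ^ 0xFFFFFFFF

# Standard check value for the CRC32C family.
assert crc32c(b"123456789") == 0xE3069283

A mismatch is reported as a transport-level problem rather than a media error, which is why the counter read back over RPC is the command_transient_transport_error bucket of the bdev's nvme_error statistics.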
[2024-07-15 19:35:27.009993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.212 [2024-07-15 19:35:27.010032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.471 [2024-07-15 19:35:27.015600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.471 [2024-07-15 19:35:27.015985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.471 [2024-07-15 19:35:27.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.471 [2024-07-15 19:35:27.021092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.021491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.021527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.026534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.026957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.026997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.031683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.032046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.032084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.036850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.037194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.037235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.041956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.042330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.042380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.047226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.047607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.047646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.052280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.052655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.052693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.057398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.057742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.057781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.062707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.063075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.063114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.068028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.068381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.068432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.073021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.073386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.073435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.078067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.078435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.078476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.083140] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.083455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.083478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.088140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.088455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.088482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.093165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.093468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.093490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.098104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.098445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.098469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.103164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.103470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.103493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.108498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.108797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.108823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.113556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.113840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.113865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
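Because every injected failure also leaves a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion notice in this console output, the RPC counter can be sanity-checked against the log text itself (the earlier 4096-byte pass was validated with the (( 166 > 0 )) check visible above). A small, hypothetical helper for doing that on a saved copy of this log; the file-name argument is a placeholder, not something produced by this job:

import re
import sys
from collections import Counter

# Tally transient-transport-error completions per queue id, as printed by nvme_qpair.c.
pattern = re.compile(r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\) qid:(\d+)")

counts = Counter()
with open(sys.argv[1]) as log:        # e.g. a saved console.log from this run (placeholder name)
    for line in log:
        counts.update(pattern.findall(line))

print(dict(counts))                   # expected to be non-zero while crc32c corruption is active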
00:19:37.472 [2024-07-15 19:35:27.118635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.118949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.118975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.123898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.124264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.124292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.129585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.129966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.129997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.135086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.135429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.135487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.140281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.140615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.140642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.145438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.145720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.145746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.150463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.150791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.150816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.155635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.155933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.155959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.161100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.161412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.161452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.166709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.167024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.472 [2024-07-15 19:35:27.167050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.472 [2024-07-15 19:35:27.171919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.472 [2024-07-15 19:35:27.172203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.172229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.177117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.177445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.177472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.182263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.182573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.182605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.187710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.188014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.188042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.193066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.193381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.193440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.198191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.198533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.198568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.203418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.203731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.203762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.208591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.208893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.208923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.213665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.213973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.214004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.219024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.219313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.219342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.224287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.224673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.229506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.229817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.229844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.234871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.235173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.235200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.240064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.240367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.240403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.245301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.245693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.245726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.250757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.251154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.251204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.256598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.257253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.257445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.262530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.262948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 
[2024-07-15 19:35:27.262981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.473 [2024-07-15 19:35:27.268284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.473 [2024-07-15 19:35:27.268696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.473 [2024-07-15 19:35:27.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.274019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.274408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.274437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.279511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.279879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.279909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.284831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.285140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.285166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.289981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.290354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.290400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.295198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.295570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.295602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.300460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.300741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.300781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.305759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.306077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.306105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.311017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.311351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.311387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.316307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.316650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.316682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.322030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.322406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.322434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.327385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.327746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.327793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.332486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.332772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.332798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.337662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.337947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.337973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.342705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.343004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.343030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.347773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.348058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.348084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.352884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.353170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.353197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.357963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.733 [2024-07-15 19:35:27.358357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.733 [2024-07-15 19:35:27.358398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.733 [2024-07-15 19:35:27.363292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.363716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.363743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.368622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.368946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.368972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.373737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.374024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.374065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.378884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.379168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.379195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.383990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.384297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.384324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.389054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.389355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.389390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.394010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.394352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.394390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.399603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.399978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.400008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.404886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.405186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.405217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.410065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 
[2024-07-15 19:35:27.410423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.410451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.415446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.415776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.415807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.421224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.421548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.421590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.427021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.427333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.427372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.432531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.432870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.432894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.438065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.438415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.438440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.443489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.443788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.443812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.448764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.449075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.449108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.454113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.454465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.454493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.459511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.459836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.459863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.465111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.465450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.465477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.470486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.470822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.470881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.476040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.476382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.476419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.481542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.481879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.481905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.734 [2024-07-15 19:35:27.487095] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.734 [2024-07-15 19:35:27.487434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.734 [2024-07-15 19:35:27.487469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.492604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.492935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.492962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.498192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.498548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.498576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.503703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.504025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.504052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.509025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.509321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.509347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.514094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.514455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.514483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.519233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.519524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.519551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:37.735 [2024-07-15 19:35:27.524361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.524740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.524776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.735 [2024-07-15 19:35:27.529905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.735 [2024-07-15 19:35:27.530245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.735 [2024-07-15 19:35:27.530279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.994 [2024-07-15 19:35:27.535451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.535823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.535847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.540954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.541253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.541282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.546434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.546730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.546758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.551851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.552147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.552175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.557119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.557445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.557473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.562426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.562750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.562778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.567668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.567966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.567993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.572952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.573262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.573291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.578432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.578812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.578846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.583913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.584217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.584246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.589246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.589574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.589600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.594862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.595156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.595184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.600250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.600595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.600654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.605740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.606027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.606052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.611092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.611380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.611432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.616526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.616839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.616882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.621622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.621944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.621969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.627278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.627568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.627595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.632882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.633239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.633284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.638572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.638878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.638919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.643855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.644144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.644170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.649109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.649427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.649454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.654468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.654815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.659883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.660176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.660204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.665216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.665584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.665618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.670791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.671063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 
[2024-07-15 19:35:27.671091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.676202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.995 [2024-07-15 19:35:27.676521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.995 [2024-07-15 19:35:27.676549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.995 [2024-07-15 19:35:27.681606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.681904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.681933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.686962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.687303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.687326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.692521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.692813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.692838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.698132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.698479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.698507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.703758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.704109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.704140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.709399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.709754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.709799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.714965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.715316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.715346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.720494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.720831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.720893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.726025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.726383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.726413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.731341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.731654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.731686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.736882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.737212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.737243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.742451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.742797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.742831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.748063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.748394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.748449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.753516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.753814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.753851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.758818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.759118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.759145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.764243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.764573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.764609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.769611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.769909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.769941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.774932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.775227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.775260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.780219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.780566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.780620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.785650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.785976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.786009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.790986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.791295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.791343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.996 [2024-07-15 19:35:27.796678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:37.996 [2024-07-15 19:35:27.797043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.996 [2024-07-15 19:35:27.797076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.802274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.802606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.802639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.807655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.807992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.808056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.813169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.813523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.813555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.818661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.819029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.824290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 
[2024-07-15 19:35:27.824640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.824682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.829855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.830173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.830219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.835294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.835631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.835668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.840786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.841085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.841114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.846274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.255 [2024-07-15 19:35:27.846596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.255 [2024-07-15 19:35:27.846628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.255 [2024-07-15 19:35:27.851740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.852074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.852102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.857199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.857537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.857564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.862646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.862965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.862993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.868024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.868327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.868382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.873463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.873783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.873815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.878829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.879152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.879184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.884285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.884674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.884712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.889754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.890106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.890148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.895333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.895656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.895684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.900533] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.900868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.900894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.905977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.906347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.906390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.911263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.911613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.911648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.916576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.916908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.916936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.921857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.922165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.922193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.927196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.927500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.927527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.932572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.932907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.932934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:38.256 [2024-07-15 19:35:27.938103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.938455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.938483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.943684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.943982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.944018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.949135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.949478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.949502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.954778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.955091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.955120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.960123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.960455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.960483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.965408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.965706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.965738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.970735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.971078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.971113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.976216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.976602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.976633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.981755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.982072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.982105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.987060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.987395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.987425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.992421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.992772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.992800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:27.997871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:27.998178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:27.998232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:28.003479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.256 [2024-07-15 19:35:28.003854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.256 [2024-07-15 19:35:28.003888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.256 [2024-07-15 19:35:28.009134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.009484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.009513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.014608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.014906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.014936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.020136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.020467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.020495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.025725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.026037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.026066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.031010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.031331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.031369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.036252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.036562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.036595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.041604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.041918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.041946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.047016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.047314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.047342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.257 [2024-07-15 19:35:28.052373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.257 [2024-07-15 19:35:28.052682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.257 [2024-07-15 19:35:28.052720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.058205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.058540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.058572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.063817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.064115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.064144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.069209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.069520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.069548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.074699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.075001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.075029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.080154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.080465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.080507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.085776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.086072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 
[2024-07-15 19:35:28.086101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.091382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.091695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.091733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.096926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.097244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.097272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.102188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.102521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.102549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.107615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.107907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.107933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.113236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.113562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.113591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.118831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.119154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.119202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.516 [2024-07-15 19:35:28.124439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.516 [2024-07-15 19:35:28.124772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:38.516 [2024-07-15 19:35:28.124811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.129989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.130298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.130326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.135505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.135860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.135891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.141024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.141326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.141372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.146427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.146733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.146762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.152123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.152435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.152464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.157735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.158047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.158076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.163149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.163475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.163504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.168592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.168907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.168939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.173944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.174261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.174292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.179288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.179628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.179658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.184631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.184953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.184983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.189903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.190202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.190240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.195203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.195515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.195544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.200550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.200879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.200909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.206418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.206796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.206829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.211874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.212197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.212233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.217256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.217603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.217631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.222974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.223319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.223370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.228469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.228813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.228849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.233911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.234345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.234392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.239494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 
[2024-07-15 19:35:28.239819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.239849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.244805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.245125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.245164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.250203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.250553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.250585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.255503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.255814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.255846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.260814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.261135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.261174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.266129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.266478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.266523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.271571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.271906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.271935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.276881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.277222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.277251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.282232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.517 [2024-07-15 19:35:28.282573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.517 [2024-07-15 19:35:28.282608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.517 [2024-07-15 19:35:28.287674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.518 [2024-07-15 19:35:28.288040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.518 [2024-07-15 19:35:28.288078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.518 [2024-07-15 19:35:28.293222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.518 [2024-07-15 19:35:28.293629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.518 [2024-07-15 19:35:28.293664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.518 [2024-07-15 19:35:28.298960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.518 [2024-07-15 19:35:28.299397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.518 [2024-07-15 19:35:28.299434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.518 [2024-07-15 19:35:28.304589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.518 [2024-07-15 19:35:28.304951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.518 [2024-07-15 19:35:28.304988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.518 [2024-07-15 19:35:28.310022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.518 [2024-07-15 19:35:28.310380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.518 [2024-07-15 19:35:28.310414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.518 [2024-07-15 19:35:28.315390] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.518 [2024-07-15 19:35:28.315725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.518 [2024-07-15 19:35:28.315754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.320799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.321099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.321129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.326166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.326502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.326530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.331635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.331944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.331971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.336903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.337201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.337229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.342939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.343253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.343283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.348309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.348626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.348658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:38.777 [2024-07-15 19:35:28.353591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.353920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.353948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.358904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.359232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.359260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.364256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.364585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.364611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.369644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.369990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.370023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.374971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.375272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.375300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.777 [2024-07-15 19:35:28.380275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.777 [2024-07-15 19:35:28.380588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.777 [2024-07-15 19:35:28.380621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.385578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.385905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.385934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.390978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.391305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.391333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.396338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.396673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.401885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.402243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.402276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.407208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.407537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.407565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.412507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.412804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.412834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.417785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.418082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.418110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.423040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.423354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.423393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.428262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.428572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.428605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.433594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.433921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.433948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.438898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.439193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.439222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.444269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.444629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.444662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.449706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.450035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.450065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.455281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.455605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.455635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.460692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.461005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.461033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.466090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.466456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.466489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.471484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.471811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.471851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.476871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.477189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.477222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.482203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.482556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.482586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.487679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.488060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.488090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.493243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.493568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.493616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.498739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.499038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 
[2024-07-15 19:35:28.499067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.504226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.504559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.504587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.509854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.510152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.510181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.515252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.515572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.515600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.520582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.520896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.520924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.525974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.526325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.526353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.531343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.531701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.531740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.536686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.778 [2024-07-15 19:35:28.536996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.778 [2024-07-15 19:35:28.537022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.778 [2024-07-15 19:35:28.541911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.542245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.542274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.779 [2024-07-15 19:35:28.547173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.547521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.547552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.779 [2024-07-15 19:35:28.552523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.552865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.552907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.779 [2024-07-15 19:35:28.557845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.558137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.558164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.779 [2024-07-15 19:35:28.563297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.563699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.563736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.779 [2024-07-15 19:35:28.568665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.568981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.569007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.779 [2024-07-15 19:35:28.573936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:38.779 [2024-07-15 19:35:28.574262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.779 [2024-07-15 19:35:28.574289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.579774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.580134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.580163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.585371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.585719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.585745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.590682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.591002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.591040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.596212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.596533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.596569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.601646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.601973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.602017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.607216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.607526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.607553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.612656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.612967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.612995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.618070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.618406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.037 [2024-07-15 19:35:28.618435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.037 [2024-07-15 19:35:28.623577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.037 [2024-07-15 19:35:28.623920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.623951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.629116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.629435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.629464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.634718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.635041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.635069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.640172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.640496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.640524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.645555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.645893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.645921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.651006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 
[2024-07-15 19:35:28.651325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.651353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.656487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.656816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.656845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.661941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.662284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.662313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.667289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.667649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.667688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.038 [2024-07-15 19:35:28.672745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f2d00) with pdu=0x2000190fef90 00:19:39.038 [2024-07-15 19:35:28.673069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.038 [2024-07-15 19:35:28.673096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.038 00:19:39.038 Latency(us) 00:19:39.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.038 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:39.038 nvme0n1 : 2.00 5723.19 715.40 0.00 0.00 2789.43 2159.71 7506.85 00:19:39.038 =================================================================================================================== 00:19:39.038 Total : 5723.19 715.40 0.00 0.00 2789.43 2159.71 7506.85 00:19:39.038 0 00:19:39.038 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:39.038 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:39.038 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:39.038 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:39.038 | .driver_specific 00:19:39.038 | .nvme_error 00:19:39.038 | .status_code 00:19:39.038 | 
.command_transient_transport_error' 00:19:39.342 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 )) 00:19:39.342 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93687 00:19:39.342 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93687 ']' 00:19:39.342 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93687 00:19:39.342 19:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93687 00:19:39.342 killing process with pid 93687 00:19:39.342 Received shutdown signal, test time was about 2.000000 seconds 00:19:39.342 00:19:39.342 Latency(us) 00:19:39.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.342 =================================================================================================================== 00:19:39.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93687' 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93687 00:19:39.342 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93687 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93435 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93435 ']' 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93435 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93435 00:19:39.600 killing process with pid 93435 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93435' 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93435 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93435 00:19:39.600 00:19:39.600 real 0m14.889s 00:19:39.600 user 0m28.580s 00:19:39.600 sys 0m4.174s 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.600 19:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:39.600 
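For reference, the assertion above works by reading the bperf bdev's NVMe error counters back over JSON-RPC and checking that the transient-transport-error count is non-zero (369 here). A minimal stand-alone sketch of that query, assuming the same bperf RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1) the test uses:

    # Count completions that carried a transient transport error status,
    # using the same bdev_get_iostat call and jq filter shown in the log above.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest_error check passes only if at least one such completion was observed.
    (( errcount > 0 )) && echo "data digest errors were surfaced as transient transport errors"
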
************************************ 00:19:39.600 END TEST nvmf_digest_error 00:19:39.600 ************************************ 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.858 rmmod nvme_tcp 00:19:39.858 rmmod nvme_fabrics 00:19:39.858 rmmod nvme_keyring 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93435 ']' 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93435 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93435 ']' 00:19:39.858 Process with pid 93435 is not found 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93435 00:19:39.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93435) - No such process 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93435 is not found' 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:39.858 ************************************ 00:19:39.858 END TEST nvmf_digest 00:19:39.858 ************************************ 00:19:39.858 00:19:39.858 real 0m33.590s 00:19:39.858 user 1m3.522s 00:19:39.858 sys 0m8.841s 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.858 19:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:39.858 19:35:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:39.858 19:35:29 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:19:39.858 19:35:29 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:39.858 19:35:29 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:39.858 19:35:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:39.858 19:35:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.858 19:35:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:39.858 ************************************ 00:19:39.858 START TEST nvmf_mdns_discovery 00:19:39.858 ************************************ 00:19:39.858 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:40.117 * Looking for test storage... 00:19:40.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.117 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:40.118 19:35:29 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
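For orientation, the nvmf_veth_init entries that follow build a small veth-and-bridge topology: an initiator interface in the root namespace at 10.0.0.1 and two target interfaces inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined by the nvmf_br bridge. A condensed sketch of the equivalent shell, using the same names and addresses the log shows (not the full script):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end serves traffic, the peer end plugs into the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the root-namespace ends so 10.0.0.1 can reach the target addresses
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(The second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way, as the subsequent entries show.)
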
00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.118 Cannot find device "nvmf_tgt_br" 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.118 Cannot find device "nvmf_tgt_br2" 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.118 Cannot find device "nvmf_tgt_br" 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.118 Cannot find device "nvmf_tgt_br2" 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.118 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.376 19:35:29 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.376 19:35:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.376 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.376 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.376 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.376 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.376 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.376 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:40.376 00:19:40.377 --- 10.0.0.2 ping statistics --- 00:19:40.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.377 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:19:40.377 00:19:40.377 --- 10.0.0.3 ping statistics --- 00:19:40.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.377 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:19:40.377 00:19:40.377 --- 10.0.0.1 ping statistics --- 00:19:40.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.377 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93961 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93961 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 93961 ']' 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.377 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.377 [2024-07-15 19:35:30.156266] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:19:40.377 [2024-07-15 19:35:30.156403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.634 [2024-07-15 19:35:30.298639] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.634 [2024-07-15 19:35:30.369184] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:40.634 [2024-07-15 19:35:30.369244] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.634 [2024-07-15 19:35:30.369259] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.634 [2024-07-15 19:35:30.369269] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.634 [2024-07-15 19:35:30.369278] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.634 [2024-07-15 19:35:30.369312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.634 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.634 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:40.634 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.634 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.634 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 [2024-07-15 19:35:30.533928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 [2024-07-15 19:35:30.542055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 null0 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 null1 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 null2 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 null3 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94003 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94003 /tmp/host.sock 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94003 ']' 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.893 19:35:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.893 [2024-07-15 19:35:30.647505] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:19:40.893 [2024-07-15 19:35:30.647600] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94003 ] 00:19:41.151 [2024-07-15 19:35:30.788393] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.151 [2024-07-15 19:35:30.848963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94031 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:42.087 19:35:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:42.087 Process 982 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:42.087 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:42.087 Successfully dropped root privileges. 00:19:42.087 avahi-daemon 0.8 starting up. 00:19:42.087 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:43.022 Successfully called chroot(). 00:19:43.022 Successfully dropped remaining capabilities. 00:19:43.022 No service file found in /etc/avahi/services. 00:19:43.022 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:43.022 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:43.022 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:43.022 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:43.022 Network interface enumeration completed. 00:19:43.022 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:19:43.022 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:19:43.022 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:19:43.022 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:19:43.022 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 3833492588. 
00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:43.022 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:43.282 19:35:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.282 [2024-07-15 19:35:33.031649] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:43.282 19:35:33 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.282 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 [2024-07-15 19:35:33.102833] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 [2024-07-15 19:35:33.142776] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 [2024-07-15 19:35:33.150734] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.541 19:35:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:19:44.475 [2024-07-15 19:35:33.931644] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:44.733 [2024-07-15 19:35:34.531685] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:44.733 [2024-07-15 19:35:34.531741] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:44.733 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:44.733 cookie is 0 00:19:44.733 is_local: 1 00:19:44.733 our_own: 0 00:19:44.733 wide_area: 0 00:19:44.733 multicast: 1 00:19:44.733 cached: 1 00:19:44.991 [2024-07-15 19:35:34.631661] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:44.991 [2024-07-15 19:35:34.631711] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:44.991 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:44.991 cookie is 0 00:19:44.991 is_local: 1 00:19:44.991 our_own: 0 00:19:44.991 wide_area: 0 00:19:44.991 multicast: 1 00:19:44.991 cached: 1 00:19:44.991 [2024-07-15 19:35:34.631727] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:44.991 [2024-07-15 19:35:34.731666] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:44.991 [2024-07-15 19:35:34.731715] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:44.991 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:44.991 cookie is 0 00:19:44.991 is_local: 1 00:19:44.991 our_own: 0 00:19:44.991 wide_area: 0 00:19:44.991 multicast: 1 00:19:44.991 cached: 1 00:19:45.292 [2024-07-15 19:35:34.831666] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:45.292 [2024-07-15 19:35:34.831717] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:45.292 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:45.292 cookie is 0 00:19:45.292 is_local: 1 00:19:45.292 our_own: 0 00:19:45.292 wide_area: 0 00:19:45.292 multicast: 1 00:19:45.292 cached: 1 00:19:45.292 [2024-07-15 19:35:34.831733] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:45.877 [2024-07-15 19:35:35.538714] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:45.877 [2024-07-15 19:35:35.538749] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:45.877 [2024-07-15 19:35:35.538801] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:45.877 [2024-07-15 19:35:35.624873] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:19:46.136 [2024-07-15 19:35:35.681816] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:46.136 [2024-07-15 19:35:35.681847] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:46.136 [2024-07-15 19:35:35.738713] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:46.136 [2024-07-15 19:35:35.738753] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:46.136 [2024-07-15 19:35:35.738788] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:46.136 [2024-07-15 19:35:35.824862] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:19:46.136 [2024-07-15 19:35:35.881120] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:46.136 [2024-07-15 19:35:35.881169] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:48.669 19:35:38 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:19:48.669 
19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:48.669 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 19:35:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.863 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.122 [2024-07-15 19:35:39.714111] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:50.122 [2024-07-15 19:35:39.714582] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:50.122 [2024-07-15 19:35:39.714627] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:50.122 [2024-07-15 19:35:39.714669] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:50.122 [2024-07-15 19:35:39.714684] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.122 [2024-07-15 19:35:39.722085] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:50.122 [2024-07-15 19:35:39.722620] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:50.122 [2024-07-15 19:35:39.722689] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.122 19:35:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:19:50.122 [2024-07-15 19:35:39.852724] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:19:50.122 [2024-07-15 19:35:39.853722] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:19:50.122 [2024-07-15 19:35:39.912955] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:50.122 [2024-07-15 19:35:39.913002] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:19:50.122 [2024-07-15 19:35:39.913026] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:50.122 [2024-07-15 19:35:39.913047] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:50.122 [2024-07-15 19:35:39.913090] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:50.122 [2024-07-15 19:35:39.913099] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:50.122 [2024-07-15 19:35:39.913104] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:50.122 [2024-07-15 19:35:39.913118] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:50.382 [2024-07-15 19:35:39.958864] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:50.382 [2024-07-15 19:35:39.958907] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:50.382 [2024-07-15 19:35:39.958971] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:50.382 [2024-07-15 19:35:39.958981] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:50.948 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:19:50.948 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:50.949 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:50.949 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:50.949 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.949 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.949 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.208 19:35:40 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:51.208 19:35:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.469 [2024-07-15 19:35:41.043528] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:51.469 [2024-07-15 19:35:41.043584] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:51.469 [2024-07-15 19:35:41.043624] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:51.469 [2024-07-15 19:35:41.043639] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.469 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.469 [2024-07-15 19:35:41.049989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.050046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.050061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.050071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.050082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.050091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.050101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.050109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.050119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.469 [2024-07-15 19:35:41.051516] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:51.469 [2024-07-15 19:35:41.051577] 
bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:51.469 [2024-07-15 19:35:41.053348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.053423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.053437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.053446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.053457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.053466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.053476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.469 [2024-07-15 19:35:41.053485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.469 [2024-07-15 19:35:41.053495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.470 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.470 19:35:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:19:51.470 [2024-07-15 19:35:41.059944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.063316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.069963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.470 [2024-07-15 19:35:41.070084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.070108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.470 [2024-07-15 19:35:41.070120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.070137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.070152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.070161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.070172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.470 [2024-07-15 19:35:41.070189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.470 [2024-07-15 19:35:41.073326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.470 [2024-07-15 19:35:41.073456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.073479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.470 [2024-07-15 19:35:41.073490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.073507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.073522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.073531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.073540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.470 [2024-07-15 19:35:41.073555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.470 [2024-07-15 19:35:41.080025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.470 [2024-07-15 19:35:41.080111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.080132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.470 [2024-07-15 19:35:41.080143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.080159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.080174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.080183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.080192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.470 [2024-07-15 19:35:41.080207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.470 [2024-07-15 19:35:41.083426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.470 [2024-07-15 19:35:41.083540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.083561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.470 [2024-07-15 19:35:41.083571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.083588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.083621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.083632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.083642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.470 [2024-07-15 19:35:41.083656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.470 [2024-07-15 19:35:41.090080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.470 [2024-07-15 19:35:41.090161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.090182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.470 [2024-07-15 19:35:41.090194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.090220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.090236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.090244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.090254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.470 [2024-07-15 19:35:41.090269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.470 [2024-07-15 19:35:41.093492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.470 [2024-07-15 19:35:41.093601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.093622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.470 [2024-07-15 19:35:41.093632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.093662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.093704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.093717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.093726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.470 [2024-07-15 19:35:41.093740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.470 [2024-07-15 19:35:41.100135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.470 [2024-07-15 19:35:41.100286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.100308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.470 [2024-07-15 19:35:41.100319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.100335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.470 [2024-07-15 19:35:41.100349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.470 [2024-07-15 19:35:41.100359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.470 [2024-07-15 19:35:41.100368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.470 [2024-07-15 19:35:41.100411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.470 [2024-07-15 19:35:41.103556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.470 [2024-07-15 19:35:41.103669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.470 [2024-07-15 19:35:41.103690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.470 [2024-07-15 19:35:41.103701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.470 [2024-07-15 19:35:41.103716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.103749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.103761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.103770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.471 [2024-07-15 19:35:41.103785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.471 [2024-07-15 19:35:41.110237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.471 [2024-07-15 19:35:41.110322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.110343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.471 [2024-07-15 19:35:41.110353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.110382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.110397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.110406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.110415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.471 [2024-07-15 19:35:41.110430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.471 [2024-07-15 19:35:41.113622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.471 [2024-07-15 19:35:41.113732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.113752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.471 [2024-07-15 19:35:41.113763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.113788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.113822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.113834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.113843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.471 [2024-07-15 19:35:41.113858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.471 [2024-07-15 19:35:41.120292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.471 [2024-07-15 19:35:41.120432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.120453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.471 [2024-07-15 19:35:41.120463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.120479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.120493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.120501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.120510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.471 [2024-07-15 19:35:41.120525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.471 [2024-07-15 19:35:41.123687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.471 [2024-07-15 19:35:41.123815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.123836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.471 [2024-07-15 19:35:41.123846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.123862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.123895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.123907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.123916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.471 [2024-07-15 19:35:41.123931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.471 [2024-07-15 19:35:41.130380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.471 [2024-07-15 19:35:41.130463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.130484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.471 [2024-07-15 19:35:41.130495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.130517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.130532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.130541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.130550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.471 [2024-07-15 19:35:41.130564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.471 [2024-07-15 19:35:41.133754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.471 [2024-07-15 19:35:41.133873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.133893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.471 [2024-07-15 19:35:41.133904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.133921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.133954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.133966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.133976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.471 [2024-07-15 19:35:41.133991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.471 [2024-07-15 19:35:41.140437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.471 [2024-07-15 19:35:41.140602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.140623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.471 [2024-07-15 19:35:41.140634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.140650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.140664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.140672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.471 [2024-07-15 19:35:41.140681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.471 [2024-07-15 19:35:41.140696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.471 [2024-07-15 19:35:41.143846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.471 [2024-07-15 19:35:41.143935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.471 [2024-07-15 19:35:41.143957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.471 [2024-07-15 19:35:41.143968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.471 [2024-07-15 19:35:41.143986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.471 [2024-07-15 19:35:41.144019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.471 [2024-07-15 19:35:41.144031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.144040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.472 [2024-07-15 19:35:41.144055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.472 [2024-07-15 19:35:41.150527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.472 [2024-07-15 19:35:41.150636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.150658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.472 [2024-07-15 19:35:41.150669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.150686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.150701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.150725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.150734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.472 [2024-07-15 19:35:41.150749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.472 [2024-07-15 19:35:41.153902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.472 [2024-07-15 19:35:41.153999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.154018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.472 [2024-07-15 19:35:41.154029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.154045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.154077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.154089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.154098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.472 [2024-07-15 19:35:41.154113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.472 [2024-07-15 19:35:41.160595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.472 [2024-07-15 19:35:41.160695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.160726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.472 [2024-07-15 19:35:41.160737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.160752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.160767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.160775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.160784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.472 [2024-07-15 19:35:41.160799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.472 [2024-07-15 19:35:41.163968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.472 [2024-07-15 19:35:41.164078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.164098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.472 [2024-07-15 19:35:41.164109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.164124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.164156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.164167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.164176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.472 [2024-07-15 19:35:41.164190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.472 [2024-07-15 19:35:41.170666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.472 [2024-07-15 19:35:41.170757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.170779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.472 [2024-07-15 19:35:41.170790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.170806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.170820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.170829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.170838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.472 [2024-07-15 19:35:41.170853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.472 [2024-07-15 19:35:41.174035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.472 [2024-07-15 19:35:41.174123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.174143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x691300 with addr=10.0.0.3, port=4420 00:19:51.472 [2024-07-15 19:35:41.174154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x691300 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.174171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691300 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.174204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.174226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.174236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.472 [2024-07-15 19:35:41.174251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.472 [2024-07-15 19:35:41.180728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.472 [2024-07-15 19:35:41.180826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.472 [2024-07-15 19:35:41.180848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5080 with addr=10.0.0.2, port=4420 00:19:51.472 [2024-07-15 19:35:41.180858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5080 is same with the state(5) to be set 00:19:51.472 [2024-07-15 19:35:41.180875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5080 (9): Bad file descriptor 00:19:51.472 [2024-07-15 19:35:41.180889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.472 [2024-07-15 19:35:41.180898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.472 [2024-07-15 19:35:41.180907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.472 [2024-07-15 19:35:41.180922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
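Context for the burst of failures above: errno 111 is ECONNREFUSED, so each reset attempt is the host's bdev_nvme layer reconnecting to the old 10.0.0.2:4420 / 10.0.0.3:4420 listeners after they went away; every attempt fails, the controller is marked as failed, and the cycle repeats until the mDNS discovery poller re-adds the subsystems on the relocated 4421 listeners (visible in the entries that follow). A minimal sketch of how that recovery could be waited for against the same /tmp/host.sock RPC socket, reusing the rpc_cmd and jq pipeline the trace itself uses; wait_for_mdns_controllers is an illustrative name, not part of the test:

    # Poll until both mDNS-discovered controllers are attached again.
    wait_for_mdns_controllers() {
        local expected="mdns0_nvme0 mdns1_nvme0"
        local names
        for _ in $(seq 1 30); do
            names=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
                | jq -r '.[].name' | sort | xargs)
            [[ $names == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }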
00:19:51.472 [2024-07-15 19:35:41.183225] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:19:51.472 [2024-07-15 19:35:41.183260] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:51.472 [2024-07-15 19:35:41.183298] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:51.472 [2024-07-15 19:35:41.183338] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:51.472 [2024-07-15 19:35:41.183355] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:51.472 [2024-07-15 19:35:41.183386] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:51.473 [2024-07-15 19:35:41.269407] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:51.473 [2024-07-15 19:35:41.269515] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
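At this point the discovery log page has dropped the 4420 entries and re-reported both subsystems on port 4421, and the harness confirms that the controller names and all four namespaces are back. The get_subsystem_paths call it makes next (expanded in the trace below) boils down to a pipeline like the following sketch; the expected port value is taken from the comparison the trace performs right after:

    # Path check (sketch): every attached path of mdns0_nvme0 should now
    # report the relocated listener port 4421.
    ports=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    [[ $ports == 4421 ]] || echo "unexpected path port(s): $ports" >&2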
00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:52.408 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.668 19:35:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:19:52.668 [2024-07-15 19:35:42.431667] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.603 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.861 19:35:43 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.861 [2024-07-15 19:35:43.590638] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:19:53.861 
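This is a deliberate negative test: a second bdev_nvme_start_mdns_discovery under the already-registered name mdns must be rejected, which is why the harness wraps the call in its NOT helper and why the target answers with Code=-17 (File exists); the Go JSON-RPC client then prints the error dump that follows. A hedged sketch of the same expected-failure pattern in plain shell, reusing the exact RPC invocation from the trace:

    # The duplicate start must fail; treat success as a test failure.
    if out=$(rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
            -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 2>&1); then
        echo "duplicate mdns discovery name was unexpectedly accepted" >&2
        exit 1
    fi
    grep -q 'File exists' <<< "$out"   # Code=-17 reported by the target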
2024/07/15 19:35:43 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:53.861 request: 00:19:53.861 { 00:19:53.861 "method": "bdev_nvme_start_mdns_discovery", 00:19:53.861 "params": { 00:19:53.861 "name": "mdns", 00:19:53.861 "svcname": "_nvme-disc._http", 00:19:53.861 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:53.861 } 00:19:53.861 } 00:19:53.861 Got JSON-RPC error response 00:19:53.861 GoRPCClient: error on JSON-RPC call 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.861 19:35:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:19:54.427 [2024-07-15 19:35:44.179479] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:54.686 [2024-07-15 19:35:44.279468] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:54.686 [2024-07-15 19:35:44.379490] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.686 [2024-07-15 19:35:44.379536] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:54.686 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.686 cookie is 0 00:19:54.686 is_local: 1 00:19:54.686 our_own: 0 00:19:54.686 wide_area: 0 00:19:54.686 multicast: 1 00:19:54.686 cached: 1 00:19:54.686 [2024-07-15 19:35:44.479498] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.686 [2024-07-15 19:35:44.479552] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:54.686 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.686 cookie is 0 00:19:54.686 is_local: 1 00:19:54.686 our_own: 0 00:19:54.686 wide_area: 0 00:19:54.686 multicast: 1 00:19:54.686 cached: 1 00:19:54.686 [2024-07-15 19:35:44.479586] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:54.945 [2024-07-15 19:35:44.579488] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.945 [2024-07-15 19:35:44.579534] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:54.945 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.945 cookie is 0 00:19:54.945 is_local: 1 00:19:54.945 our_own: 0 00:19:54.945 wide_area: 0 00:19:54.945 multicast: 1 00:19:54.945 cached: 1 00:19:54.945 [2024-07-15 19:35:44.679492] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.945 [2024-07-15 19:35:44.679543] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:54.945 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.945 cookie is 0 00:19:54.945 is_local: 1 00:19:54.945 our_own: 0 00:19:54.945 wide_area: 0 00:19:54.945 multicast: 1 00:19:54.945 cached: 1 00:19:54.945 [2024-07-15 19:35:44.679560] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:55.881 [2024-07-15 19:35:45.390749] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:55.881 [2024-07-15 19:35:45.390787] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:55.881 [2024-07-15 19:35:45.390807] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:55.881 [2024-07-15 19:35:45.476890] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:19:55.881 [2024-07-15 19:35:45.537165] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:55.881 [2024-07-15 19:35:45.537212] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:55.881 [2024-07-15 19:35:45.590687] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:55.881 [2024-07-15 19:35:45.590716] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:55.881 [2024-07-15 19:35:45.590751] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:55.881 [2024-07-15 19:35:45.676841] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:19:56.162 [2024-07-15 19:35:45.737072] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:56.162 [2024-07-15 19:35:45.737120] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:59.503 19:35:48 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:59.503 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 [2024-07-15 19:35:48.786782] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:19:59.504 2024/07/15 19:35:48 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:59.504 request: 00:19:59.504 { 00:19:59.504 "method": "bdev_nvme_start_mdns_discovery", 00:19:59.504 "params": { 00:19:59.504 "name": "cdc", 00:19:59.504 "svcname": "_nvme-disc._tcp", 00:19:59.504 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:59.504 } 00:19:59.504 } 00:19:59.504 Got JSON-RPC error response 00:19:59.504 GoRPCClient: error on JSON-RPC call 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94003 00:19:59.504 19:35:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94003 00:19:59.504 [2024-07-15 19:35:48.991006] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94031 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:19:59.504 Got SIGTERM, quitting. 00:19:59.504 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:59.504 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:59.504 avahi-daemon 0.8 exiting. 
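The teardown that produced the avahi-daemon messages above is driven over RPC plus a couple of kills: the host stops its mDNS discovery service, the target gets the nvmf_stop_mdns_prr call, and the two helper processes the script started for this test (94003 and 94031 in this run) are killed before nvmftestfini tears the transport down. A condensed sketch of that sequence, with the pids shown only because they are what this run used:

    # Teardown as traced above (sketch; pids are specific to this run).
    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
    rpc_cmd nvmf_stop_mdns_prr
    kill 94003 && wait 94003 || true
    kill 94031 || true
    nvmftestfini    # harness helper; its module/process cleanup is traced next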
00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.504 rmmod nvme_tcp 00:19:59.504 rmmod nvme_fabrics 00:19:59.504 rmmod nvme_keyring 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93961 ']' 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93961 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 93961 ']' 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 93961 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93961 00:19:59.504 killing process with pid 93961 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93961' 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 93961 00:19:59.504 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 93961 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.761 00:19:59.761 real 0m19.788s 00:19:59.761 user 0m39.435s 00:19:59.761 sys 0m1.949s 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.761 19:35:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.761 ************************************ 00:19:59.761 END TEST nvmf_mdns_discovery 00:19:59.761 ************************************ 00:19:59.761 19:35:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
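The killprocess trace above (pid 93961, process name reactor_1) follows the harness's usual pattern: confirm a pid was supplied, confirm the process is still alive, look up its command name (the trace compares it against sudo) and then kill and reap it. A standalone bash sketch mirroring the commands in the trace:

    # killprocess-style helper (sketch), mirroring the trace above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1             # the trace checks the name against 'sudo' first
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap if it is our child
    }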
00:19:59.761 19:35:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:19:59.761 19:35:49 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:59.761 19:35:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:59.761 19:35:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.761 19:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.761 ************************************ 00:19:59.761 START TEST nvmf_host_multipath 00:19:59.761 ************************************ 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:59.761 * Looking for test storage... 00:19:59.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.761 19:35:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.762 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:00.018 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:00.019 Cannot 
find device "nvmf_tgt_br" 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.019 Cannot find device "nvmf_tgt_br2" 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:00.019 Cannot find device "nvmf_tgt_br" 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:00.019 Cannot find device "nvmf_tgt_br2" 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.019 19:35:49 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.019 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:20:00.277 00:20:00.277 --- 10.0.0.2 ping statistics --- 00:20:00.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.277 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.277 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.277 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:00.277 00:20:00.277 --- 10.0.0.3 ping statistics --- 00:20:00.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.277 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:00.277 00:20:00.277 --- 10.0.0.1 ping statistics --- 00:20:00.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.277 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94584 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94584 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94584 ']' 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.277 19:35:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:00.277 [2024-07-15 19:35:49.994556] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:20:00.277 [2024-07-15 19:35:49.994642] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.534 [2024-07-15 19:35:50.135165] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:00.534 [2024-07-15 19:35:50.203921] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
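The nvmf_veth_init sequence above reduces to a small fixed topology; the following sketch is assembled only from the ip/iptables calls visible in the trace (the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.x/24 addresses are copied from the log, and the ordering is condensed, so treat it as a reconstruction rather than the full nvmf/common.sh logic):

    ip netns add nvmf_tgt_ns_spdk                               # target side runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joining the host-side peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm that the bridge forwards in both directions before nvmf_tgt is launched inside nvmf_tgt_ns_spdk via the NVMF_TARGET_NS_CMD prefix shown on its command line.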
00:20:00.534 [2024-07-15 19:35:50.203976] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.534 [2024-07-15 19:35:50.203990] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.534 [2024-07-15 19:35:50.204000] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.534 [2024-07-15 19:35:50.204009] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.535 [2024-07-15 19:35:50.204155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.535 [2024-07-15 19:35:50.204169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94584 00:20:01.464 19:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:01.721 [2024-07-15 19:35:51.305429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.721 19:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:01.978 Malloc0 00:20:01.978 19:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:02.236 19:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.493 19:35:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.813 [2024-07-15 19:35:52.339954] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:02.813 [2024-07-15 19:35:52.576140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94688 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94688 /var/tmp/bdevperf.sock 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94688 ']' 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.813 19:35:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.071 19:35:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:04.004 19:35:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.004 19:35:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:04.004 19:35:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:04.262 19:35:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:04.520 Nvme0n1 00:20:04.778 19:35:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:05.037 Nvme0n1 00:20:05.037 19:35:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:05.037 19:35:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.972 19:35:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:05.972 19:35:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:06.539 19:35:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:06.797 19:35:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:06.797 19:35:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94781 00:20:06.797 19:35:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:06.797 19:35:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:13.392 19:36:02 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:13.392 Attaching 4 probes... 00:20:13.392 @path[10.0.0.2, 4421]: 16482 00:20:13.392 @path[10.0.0.2, 4421]: 17032 00:20:13.392 @path[10.0.0.2, 4421]: 17123 00:20:13.392 @path[10.0.0.2, 4421]: 16877 00:20:13.392 @path[10.0.0.2, 4421]: 17015 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94781 00:20:13.392 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:13.393 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:13.393 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:13.393 19:36:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:13.651 19:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:13.651 19:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94918 00:20:13.651 19:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:13.651 19:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:20.226 Attaching 4 probes... 
00:20:20.226 @path[10.0.0.2, 4420]: 17143 00:20:20.226 @path[10.0.0.2, 4420]: 17210 00:20:20.226 @path[10.0.0.2, 4420]: 17271 00:20:20.226 @path[10.0.0.2, 4420]: 17474 00:20:20.226 @path[10.0.0.2, 4420]: 17049 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94918 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:20.226 19:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:20.511 19:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:20.511 19:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95043 00:20:20.511 19:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:20.511 19:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:27.127 Attaching 4 probes... 
00:20:27.127 @path[10.0.0.2, 4421]: 13764 00:20:27.127 @path[10.0.0.2, 4421]: 17411 00:20:27.127 @path[10.0.0.2, 4421]: 17228 00:20:27.127 @path[10.0.0.2, 4421]: 17253 00:20:27.127 @path[10.0.0.2, 4421]: 17274 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95043 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:27.127 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:27.385 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:27.385 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95179 00:20:27.385 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:27.385 19:36:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:33.943 19:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:33.943 19:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:33.943 Attaching 4 probes... 
00:20:33.943 00:20:33.943 00:20:33.943 00:20:33.943 00:20:33.943 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95179 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:33.943 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:34.201 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:34.201 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95310 00:20:34.201 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:34.201 19:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:40.759 19:36:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:40.759 19:36:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:40.759 Attaching 4 probes... 
00:20:40.759 @path[10.0.0.2, 4421]: 16440 00:20:40.759 @path[10.0.0.2, 4421]: 17071 00:20:40.759 @path[10.0.0.2, 4421]: 16742 00:20:40.759 @path[10.0.0.2, 4421]: 17040 00:20:40.759 @path[10.0.0.2, 4421]: 16787 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95310 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:40.759 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:40.759 [2024-07-15 19:36:30.283622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 
00:20:40.759 [2024-07-15 19:36:30.283778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283820] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283975] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.283993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.284001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.284009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.284017] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.284026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.759 [2024-07-15 19:36:30.284035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284146] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284172] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 [2024-07-15 19:36:30.284180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f169f0 is same with the state(5) to be set 00:20:40.760 19:36:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:41.710 19:36:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:41.710 19:36:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95440 00:20:41.710 19:36:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:41.710 19:36:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:48.265 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:48.265 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:48.266 Attaching 4 probes... 
00:20:48.266 @path[10.0.0.2, 4420]: 16766 00:20:48.266 @path[10.0.0.2, 4420]: 16733 00:20:48.266 @path[10.0.0.2, 4420]: 16991 00:20:48.266 @path[10.0.0.2, 4420]: 17033 00:20:48.266 @path[10.0.0.2, 4420]: 17061 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95440 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:48.266 [2024-07-15 19:36:37.871136] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:48.266 19:36:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:48.524 19:36:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:55.081 19:36:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:55.081 19:36:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95638 00:20:55.081 19:36:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:55.081 19:36:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:01.653 Attaching 4 probes... 
00:21:01.653 @path[10.0.0.2, 4421]: 16734 00:21:01.653 @path[10.0.0.2, 4421]: 16697 00:21:01.653 @path[10.0.0.2, 4421]: 16663 00:21:01.653 @path[10.0.0.2, 4421]: 16758 00:21:01.653 @path[10.0.0.2, 4421]: 16963 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95638 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94688 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94688 ']' 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94688 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94688 00:21:01.653 killing process with pid 94688 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94688' 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94688 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94688 00:21:01.653 Connection closed with partial response: 00:21:01.653 00:21:01.653 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94688 00:21:01.653 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.653 [2024-07-15 19:35:52.666133] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:01.653 [2024-07-15 19:35:52.666324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94688 ] 00:21:01.653 [2024-07-15 19:35:52.813375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.653 [2024-07-15 19:35:52.882175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.653 Running I/O for 90 seconds... 
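Each confirm_io_on_port pass recorded above follows the same pattern: start the nvmf_path.bt bpftrace probe against the target pid, let bdevperf run for a few seconds, ask the target which listener currently advertises the requested ANA state, and check that every sampled @path[...] entry points at that port. A condensed sketch, using only the helper paths and pipelines that appear in the trace; the expected_state/expected_port variables and the redirection of the probe output into trace.txt are assumptions made here for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # probe per-path I/O on the running target (pid 94584 in this run); nvmf_path.bt
    # emits lines of the form "@path[10.0.0.2, 4421]: <io count>"
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94584 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" 2>&1 &
    dtrace_pid=$!
    sleep 6

    # which listener currently carries the requested ANA state?
    active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

    # first port sampled by the probe
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

    kill "$dtrace_pid"
    [[ $port == "$expected_port" && $active_port == "$expected_port" ]]   # non-zero status fails the pass
    rm -f "$trace"

The empty pass earlier in the log (confirm_io_on_port '' '' while both listeners were inaccessible) is the same check with an empty expected port: no @path samples and no matching listener means both sides of the comparison are empty strings.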
00:21:01.653 [2024-07-15 19:36:03.264525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.653 [2024-07-15 19:36:03.265200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.265376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.265499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.265598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.265689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.265804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.265897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.265989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.266080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.266167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.266273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.266410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.266593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.266688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.269448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.269592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.269698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.269804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.269904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.270016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.270116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.270200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.270320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.270517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.270621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.653 [2024-07-15 19:36:03.270719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.653 [2024-07-15 19:36:03.270826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.270911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.271002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.271092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.271179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.271261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.271344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.271472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.271568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.271658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.271740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.271859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.272021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.272180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.272332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.272539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.272686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.272827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.273020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.273175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.273300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.273446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.273550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.273665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.273756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.273861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.273952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.274048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.274314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.274425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:01.654 [2024-07-15 19:36:03.274514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.274602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.274702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.274823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.274916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.275023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.275104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.275195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.275294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.275406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.275490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.275589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.275669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.275755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.275868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.275956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.276041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.276125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.276212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.276297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.276397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.276492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.276579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.277207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.654 [2024-07-15 19:36:03.277343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.277480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.277576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.277659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.277742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.277859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.277952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.278037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.278114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.278199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.278311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.278419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.278523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.278617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.278695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.278802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.278892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.278989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.279066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.279140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.279218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.279337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.279456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.279553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.279640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.654 [2024-07-15 19:36:03.279722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.654 [2024-07-15 19:36:03.279825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.279916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.280007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.280096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.280178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.280267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.280344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.280460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.280556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:21:01.655 [2024-07-15 19:36:03.280646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.280728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.280843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.280938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.281034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.281112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.281198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.281278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.281385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.281486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.281582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.281745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.281847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.281941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.282027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.282111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.282217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.282320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.282434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.282546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.282641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.282731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.282833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.282926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.283015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.283119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.283209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.283298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.283408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.283502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.283579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.283668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.283773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.283915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.284016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.284208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.284302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.284406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.284502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.284580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.284668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.284752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.284867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.284956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.285041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.285122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:03.285208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:03.285291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.825879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.655 [2024-07-15 19:36:09.825941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.655 [2024-07-15 19:36:09.826038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.655 [2024-07-15 19:36:09.826077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:09.826114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:01.655 [2024-07-15 19:36:09.826151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:09.826188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:09.826236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.655 [2024-07-15 19:36:09.826273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.655 [2024-07-15 19:36:09.826311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.655 [2024-07-15 19:36:09.826332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:99 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.826866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.826881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.656 [2024-07-15 19:36:09.828300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 
19:36:09.828333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.828972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.828998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.656 [2024-07-15 19:36:09.829476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.656 [2024-07-15 19:36:09.829491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.657 [2024-07-15 19:36:09.829613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.829976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.829991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.657 [2024-07-15 19:36:09.830830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:21:01.657 [2024-07-15 19:36:09.830871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.657 [2024-07-15 19:36:09.830886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.830912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.830927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.830952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.830974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:09.831612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:09.831628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.905671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.905686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 
[2024-07-15 19:36:16.906565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1880 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.906975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.906990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.907013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.658 [2024-07-15 19:36:16.907028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.658 [2024-07-15 19:36:16.907051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.659 [2024-07-15 19:36:16.907789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.907843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.907884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.907925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.907966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.907991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:21:01.659 [2024-07-15 19:36:16.908466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.908960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.908975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.909001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.909016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.909041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.659 [2024-07-15 19:36:16.909057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.659 [2024-07-15 19:36:16.909083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 
[2024-07-15 19:36:16.909922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.909965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.909993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1592 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.660 [2024-07-15 19:36:16.910807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:118 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.910850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.910893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.910936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.910964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.910979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.911007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.911022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.911050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.911065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.911093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.911108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.911136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.660 [2024-07-15 19:36:16.911157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.660 [2024-07-15 19:36:16.911191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:16.911521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:16.911536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.661 [2024-07-15 19:36:30.283621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:21:01.661 [2024-07-15 19:36:30.283828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.283975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.283990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.661 [2024-07-15 19:36:30.284680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.661 [2024-07-15 19:36:30.284694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.284730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.284776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.284812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.284849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.284884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.284921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.284943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:01.662 [2024-07-15 19:36:30.284957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.662 [2024-07-15 19:36:30.285087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.662 [2024-07-15 19:36:30.285116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.662 [2024-07-15 19:36:30.285143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.662 [2024-07-15 19:36:30.285171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257b4a0 is same with the state(5) to be set 00:21:01.662 [2024-07-15 19:36:30.285540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 
19:36:30.285687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.285970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.285986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.662 [2024-07-15 19:36:30.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:120 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.662 [2024-07-15 19:36:30.286432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.662 [2024-07-15 19:36:30.286446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38600 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.286772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.286801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.286861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.286890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 
[2024-07-15 19:36:30.286919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.286948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.286977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.286993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.663 [2024-07-15 19:36:30.287427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.663 [2024-07-15 19:36:30.287663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.663 [2024-07-15 19:36:30.287679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.287972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.287986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 
[2024-07-15 19:36:30.288117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.664 [2024-07-15 19:36:30.288435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.288813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.288826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.309104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.309160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.309186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.309220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.309265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.309285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.309306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.309324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.664 [2024-07-15 19:36:30.309345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.664 [2024-07-15 19:36:30.309383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37840 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.665 [2024-07-15 19:36:30.309913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.309933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.665 [2024-07-15 19:36:30.309951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.310011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.665 [2024-07-15 19:36:30.310029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.665 [2024-07-15 19:36:30.310044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37936 len:8 PRP1 0x0 PRP2 0x0 00:21:01.665 [2024-07-15 19:36:30.310062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.665 [2024-07-15 19:36:30.310126] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23dd240 was disconnected and freed. reset controller. 00:21:01.665 [2024-07-15 19:36:30.310231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257b4a0 (9): Bad file descriptor 00:21:01.665 [2024-07-15 19:36:30.312017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.665 [2024-07-15 19:36:30.312557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.665 [2024-07-15 19:36:30.312601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257b4a0 with addr=10.0.0.2, port=4421 00:21:01.665 [2024-07-15 19:36:30.312624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257b4a0 is same with the state(5) to be set 00:21:01.665 [2024-07-15 19:36:30.312685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257b4a0 (9): Bad file descriptor 00:21:01.665 [2024-07-15 19:36:30.312721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.665 [2024-07-15 19:36:30.312740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.665 [2024-07-15 19:36:30.312758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.665 [2024-07-15 19:36:30.312792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.665 [2024-07-15 19:36:30.312810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.665 [2024-07-15 19:36:40.415460] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
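The burst of *NOTICE* lines above is nvme_qpair.c dumping every command that was still queued on qpair 0x23dd240 when its submission queue was deleted: each WRITE/READ line is an outstanding command, and the paired "ABORTED - SQ DELETION (00/08)" line is the completion status it was manually given. Once the qpair is freed, the host disconnects nqn.2016-06.io.spdk:cnode1 and resets it; the first reconnect to 10.0.0.2 port 4421 fails with errno 111 (connection refused) and leaves the controller in a failed state, and the retry about ten seconds later (19:36:40) succeeds. A minimal sketch of the target-side configuration such a failover exercises is shown below; the RPC subcommands are the same ones that appear later in this log for the timeout test, but the second listener on port 4421 (inferred from the reconnect address above) and the exact ordering are assumptions, not a copy of multipath.sh:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport, backing bdev, subsystem and namespace (same RPCs the timeout test uses below)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners so the initiator can fail over between ports; 4421 is assumed from the reconnect above
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421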
00:21:01.665 Received shutdown signal, test time was about 55.620911 seconds 00:21:01.665 00:21:01.665 Latency(us) 00:21:01.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.665 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.665 Verification LBA range: start 0x0 length 0x4000 00:21:01.665 Nvme0n1 : 55.62 7280.52 28.44 0.00 0.00 17548.39 528.76 7046430.72 00:21:01.665 =================================================================================================================== 00:21:01.665 Total : 7280.52 28.44 0.00 0.00 17548.39 528.76 7046430.72 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.665 rmmod nvme_tcp 00:21:01.665 rmmod nvme_fabrics 00:21:01.665 rmmod nvme_keyring 00:21:01.665 19:36:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94584 ']' 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94584 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94584 ']' 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94584 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94584 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:01.665 killing process with pid 94584 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94584' 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94584 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94584 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:01.665 00:21:01.665 real 1m1.771s 00:21:01.665 user 2m56.266s 00:21:01.665 sys 0m13.210s 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.665 ************************************ 00:21:01.665 19:36:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:01.665 END TEST nvmf_host_multipath 00:21:01.665 ************************************ 00:21:01.665 19:36:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:01.665 19:36:51 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:01.665 19:36:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:01.665 19:36:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.665 19:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.665 ************************************ 00:21:01.665 START TEST nvmf_timeout 00:21:01.665 ************************************ 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:01.665 * Looking for test storage... 
00:21:01.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.665 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.666 
19:36:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.666 19:36:51 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:01.666 Cannot find device "nvmf_tgt_br" 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.666 Cannot find device "nvmf_tgt_br2" 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:01.666 Cannot find device "nvmf_tgt_br" 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:01.666 Cannot find device "nvmf_tgt_br2" 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:01.666 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:01.924 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:01.924 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:01.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.924 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:01.924 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:01.925 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:01.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:21:01.925 00:21:01.925 --- 10.0.0.2 ping statistics --- 00:21:01.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.925 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:01.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:01.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:21:01.925 00:21:01.925 --- 10.0.0.3 ping statistics --- 00:21:01.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.925 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:01.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:01.925 00:21:01.925 --- 10.0.0.1 ping statistics --- 00:21:01.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.925 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95953 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95953 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95953 ']' 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.925 19:36:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.184 [2024-07-15 19:36:51.801933] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:21:02.184 [2024-07-15 19:36:51.802051] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.184 [2024-07-15 19:36:51.947942] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:02.442 [2024-07-15 19:36:52.006764] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.442 [2024-07-15 19:36:52.006839] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.442 [2024-07-15 19:36:52.006851] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.442 [2024-07-15 19:36:52.006859] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.442 [2024-07-15 19:36:52.006867] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.442 [2024-07-15 19:36:52.007032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.442 [2024-07-15 19:36:52.007043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.442 19:36:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:02.700 [2024-07-15 19:36:52.348234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.700 19:36:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:02.958 Malloc0 00:21:02.958 19:36:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.216 19:36:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.475 19:36:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.737 [2024-07-15 19:36:53.420914] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96032 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96032 /var/tmp/bdevperf.sock 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96032 ']' 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.737 19:36:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:03.737 [2024-07-15 19:36:53.487497] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:03.737 [2024-07-15 19:36:53.487587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96032 ] 00:21:04.009 [2024-07-15 19:36:53.620489] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.009 [2024-07-15 19:36:53.679064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.941 19:36:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.941 19:36:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:04.941 19:36:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:04.941 19:36:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:05.199 NVMe0n1 00:21:05.199 19:36:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.199 19:36:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96075 00:21:05.199 19:36:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:05.457 Running I/O for 10 seconds... 
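Stripped of the xtrace noise, the target/initiator bring-up traced above reduces to the RPC sequence below. SPDK_DIR is only shorthand for the /home/vagrant/spdk_repo/spdk checkout used in this run, and the bare sleeps stand in for the suite's waitforlisten helper; the sockets, NQN, queue depth and timeout options are taken verbatim from the trace:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC=$SPDK_DIR/scripts/rpc.py

  # target side: nvmf_tgt runs inside the namespace, RPCs go over the default /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  sleep 2
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf idles in -z mode until a controller is attached and the tests are kicked off
  $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  sleep 2
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, bdev_nvme retries the connection roughly every two seconds and gives up on the controller a few seconds after it first drops, which matches the reconnect attempts spaced two seconds apart in the trace that follows.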
00:21:06.391 19:36:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.652 [2024-07-15 19:36:56.251903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.251982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.251995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.252099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0bd0 is same with the state(5) to be set 00:21:06.652 [2024-07-15 19:36:56.253391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:06.652 [2024-07-15 19:36:56.253495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.652 [2024-07-15 19:36:56.253762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.253986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.253995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.254006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.254015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.254026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.254036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.254047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.254057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.254068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.254077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.254088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.254097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.652 [2024-07-15 19:36:56.254108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.652 [2024-07-15 19:36:56.254117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 
[2024-07-15 19:36:56.254345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.653 [2024-07-15 19:36:56.254945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82608 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.254986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.254997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 
19:36:56.255193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.653 [2024-07-15 19:36:56.255618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.653 [2024-07-15 19:36:56.255662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82016 len:8 PRP1 0x0 PRP2 0x0 00:21:06.653 [2024-07-15 19:36:56.255671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.653 [2024-07-15 19:36:56.255693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.653 [2024-07-15 19:36:56.255701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82024 len:8 PRP1 0x0 PRP2 0x0 00:21:06.653 [2024-07-15 19:36:56.255710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.653 [2024-07-15 19:36:56.255720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.653 [2024-07-15 19:36:56.255727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82032 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82040 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82048 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82056 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:06.654 [2024-07-15 19:36:56.255862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82064 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82072 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82080 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.255971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82088 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.255980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.255990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.255997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82096 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82104 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256065] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82112 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82120 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82128 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82136 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82144 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82152 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:21:06.654 [2024-07-15 19:36:56.256278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82160 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82168 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82176 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82184 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82192 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.654 [2024-07-15 19:36:56.256452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.654 [2024-07-15 19:36:56.256460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82200 len:8 PRP1 0x0 PRP2 0x0 00:21:06.654 [2024-07-15 19:36:56.256469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.654 [2024-07-15 19:36:56.256513] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11296f0 was disconnected and freed. reset controller. 
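The long dump above is one print_command/print_completion pair per I/O that was still in flight or queued when the listener was removed: each command is completed manually with ABORTED - SQ DELETION, after which the qpair is freed and the controller reset begins. When sifting through a saved console log of this kind (the file name here is only an example), a couple of grep one-liners give the totals:

  grep -c 'ABORTED - SQ DELETION' bdevperf_console.log        # number of aborted completions
  grep -o 'lba:[0-9]*' bdevperf_console.log | sort -u | wc -l   # distinct LBAs touched by the aborted I/O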
00:21:06.654 [2024-07-15 19:36:56.256758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:06.654 [2024-07-15 19:36:56.256845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba3e0 (9): Bad file descriptor
00:21:06.654 [2024-07-15 19:36:56.256965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:06.654 [2024-07-15 19:36:56.256987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba3e0 with addr=10.0.0.2, port=4420
00:21:06.654 [2024-07-15 19:36:56.256998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba3e0 is same with the state(5) to be set
00:21:06.654 [2024-07-15 19:36:56.257017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba3e0 (9): Bad file descriptor
00:21:06.654 [2024-07-15 19:36:56.257034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:06.654 [2024-07-15 19:36:56.257043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:06.654 [2024-07-15 19:36:56.257053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:06.654 [2024-07-15 19:36:56.257073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:06.654 [2024-07-15 19:36:56.257083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:06.654 19:36:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:21:08.554 [2024-07-15 19:36:58.257383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.554 [2024-07-15 19:36:58.257458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba3e0 with addr=10.0.0.2, port=4420
00:21:08.554 [2024-07-15 19:36:58.257475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba3e0 is same with the state(5) to be set
00:21:08.554 [2024-07-15 19:36:58.257503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba3e0 (9): Bad file descriptor
00:21:08.554 [2024-07-15 19:36:58.257535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:08.554 [2024-07-15 19:36:58.257547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:08.554 [2024-07-15 19:36:58.257558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:08.554 [2024-07-15 19:36:58.257586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:08.554 [2024-07-15 19:36:58.257604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:08.554 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:21:08.554 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:08.554 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:08.812 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:21:08.812 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:21:08.813 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:08.813 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:09.071 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:21:09.071 19:36:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:21:10.970 [2024-07-15 19:37:00.257819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:10.970 [2024-07-15 19:37:00.257896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba3e0 with addr=10.0.0.2, port=4420
00:21:10.970 [2024-07-15 19:37:00.257912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba3e0 is same with the state(5) to be set
00:21:10.971 [2024-07-15 19:37:00.257939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba3e0 (9): Bad file descriptor
00:21:10.971 [2024-07-15 19:37:00.257958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:10.971 [2024-07-15 19:37:00.257967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:10.971 [2024-07-15 19:37:00.257979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:10.971 [2024-07-15 19:37:00.258006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:10.971 [2024-07-15 19:37:00.258017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:12.874 [2024-07-15 19:37:02.258135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:12.874 [2024-07-15 19:37:02.258203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:12.874 [2024-07-15 19:37:02.258223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:12.874 [2024-07-15 19:37:02.258234] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:21:12.874 [2024-07-15 19:37:02.258261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:13.833
00:21:13.833 Latency(us)
00:21:13.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:13.833 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:13.833 Verification LBA range: start 0x0 length 0x4000
00:21:13.833 NVMe0n1 : 8.17 1251.52 4.89 15.66 0.00 100871.46 2189.50 7015926.69
00:21:13.833 ===================================================================================================================
00:21:13.833 Total : 1251.52 4.89 15.66 0.00 100871.46 2189.50 7015926.69
00:21:13.833 0
00:21:14.091 19:37:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:21:14.091 19:37:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:14.091 19:37:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:14.348 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:21:14.348 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:21:14.348 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:14.348 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96075
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96032
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96032 ']'
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96032
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:14.606 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96032
00:21:14.607 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:14.607 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:14.607 killing process with pid 96032 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96032'
00:21:14.607 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96032
00:21:14.607 Received shutdown signal, test time was about 9.277707 seconds
00:21:14.607
00:21:14.607 Latency(us)
00:21:14.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.607 ===================================================================================================================
00:21:14.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:14.607 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96032
00:21:14.865 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:15.123 [2024-07-15 19:37:04.732871] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96234
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96234 /var/tmp/bdevperf.sock
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96234 ']'
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:15.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:15.123 19:37:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:21:15.123 [2024-07-15 19:37:04.814023] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization...
00:21:15.123 [2024-07-15 19:37:04.814144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96234 ]
00:21:15.382 [2024-07-15 19:37:04.952544] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:15.382 [2024-07-15 19:37:05.041387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:16.316 19:37:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:16.316 19:37:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:21:16.316 19:37:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:21:16.316 19:37:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:21:16.883 NVMe0n1
00:21:16.883 19:37:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96276
00:21:16.883 19:37:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:16.883 19:37:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:21:16.883 Running I/O for 10 seconds...
00:21:17.817 19:37:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.079 [2024-07-15 19:37:07.641574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.641693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faeb30 is same with the state(5) to be set 00:21:18.079 [2024-07-15 19:37:07.642900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.642949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.642973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.642985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.642999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:18.079 [2024-07-15 19:37:07.643305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643538] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.079 [2024-07-15 19:37:07.643879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.079 [2024-07-15 19:37:07.643934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.079 [2024-07-15 19:37:07.643944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.643955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.643965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.643977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.643987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.643999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.080 [2024-07-15 19:37:07.644888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.080 [2024-07-15 19:37:07.644898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.644910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.081 [2024-07-15 19:37:07.644920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.644931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:18.081 [2024-07-15 19:37:07.644941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.644972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.644984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80168 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.644994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80176 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80184 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80192 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:18.081 [2024-07-15 19:37:07.645136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80200 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80272 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:80296 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80304 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80312 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80320 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80328 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80336 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80344 len:8 PRP1 0x0 PRP2 0x0 
00:21:18.081 [2024-07-15 19:37:07.645785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.081 [2024-07-15 19:37:07.645794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.081 [2024-07-15 19:37:07.645802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.081 [2024-07-15 19:37:07.645810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:21:18.081 [2024-07-15 19:37:07.645819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.645829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.645836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.645844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.645853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.645862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.645870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.645878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.645888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.645897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.645907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.645915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.645924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.645933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.645940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.645948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.645957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.645967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.645974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.645982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.645991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.646227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.646235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.646245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.646254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.657057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.657091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.657104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.657120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.082 [2024-07-15 19:37:07.657129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.082 [2024-07-15 19:37:07.657138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0 00:21:18.082 [2024-07-15 19:37:07.657147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.657202] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20086f0 was disconnected and freed. reset controller. 
00:21:18.082 [2024-07-15 19:37:07.657325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.082 [2024-07-15 19:37:07.657343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.657370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.082 [2024-07-15 19:37:07.657382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.657393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.082 [2024-07-15 19:37:07.657403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.657413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.082 [2024-07-15 19:37:07.657422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.082 [2024-07-15 19:37:07.657431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set 00:21:18.082 [2024-07-15 19:37:07.657670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:18.082 [2024-07-15 19:37:07.657704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor 00:21:18.082 [2024-07-15 19:37:07.657806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.082 [2024-07-15 19:37:07.657828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f993e0 with addr=10.0.0.2, port=4420 00:21:18.082 [2024-07-15 19:37:07.657840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set 00:21:18.082 [2024-07-15 19:37:07.657859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor 00:21:18.082 [2024-07-15 19:37:07.657875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:18.082 [2024-07-15 19:37:07.657885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:18.082 [2024-07-15 19:37:07.657896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:18.082 [2024-07-15 19:37:07.657915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:18.082 [2024-07-15 19:37:07.657926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:18.082 19:37:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:21:19.043 [2024-07-15 19:37:08.658072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.043 [2024-07-15 19:37:08.658151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f993e0 with addr=10.0.0.2, port=4420
00:21:19.043 [2024-07-15 19:37:08.658167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set
00:21:19.043 [2024-07-15 19:37:08.658194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor
00:21:19.043 [2024-07-15 19:37:08.658224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:19.043 [2024-07-15 19:37:08.658235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:19.043 [2024-07-15 19:37:08.658246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:19.043 [2024-07-15 19:37:08.658275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:19.043 [2024-07-15 19:37:08.658287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:19.043 19:37:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:19.302 [2024-07-15 19:37:08.928767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:19.302 19:37:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96276
00:21:19.868 [2024-07-15 19:37:09.671251] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:27.996
00:21:27.996                                            Latency(us)
00:21:27.996 Device Information : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average        min        max
00:21:27.996 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:27.996 Verification LBA range: start 0x0 length 0x4000
00:21:27.996 NVMe0n1            :      10.01   6280.65     24.53      0.00     0.00   20351.86    2144.81 3035150.89
00:21:27.996 ===================================================================================================================
00:21:27.996 Total              :              6280.65     24.53      0.00     0.00   20351.86    2144.81 3035150.89
00:21:27.996 0
00:21:27.996 19:37:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96394
00:21:27.996 19:37:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:27.996 19:37:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:27.996 Running I/O for 10 seconds...
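The block above is the recovery half of the timeout test: reconnect attempts to 10.0.0.2:4420 fail with errno 111 until the listener is re-added over JSON-RPC, the pending bdevperf run then completes and prints its latency summary, and a second 10-second run is started before the listener is torn down again below. A minimal shell sketch of that RPC sequence follows; it only restates the rpc.py and bdevperf.py invocations visible in this log (paths, NQN, address and port are copied from the log), and is an illustration rather than the verbatim contents of the harness script (host/timeout.sh in the SPDK repo).

    # Sketch reconstructed from the commands shown above -- not the actual host/timeout.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_rpc="/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock"

    # Re-add the TCP listener so the host-side controller reset can reconnect (timeout.sh@91).
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Start another bdevperf run through the bdevperf application's RPC helper (timeout.sh@96-98).
    $bperf_rpc perform_tests &
    rpc_pid=$!   # the log records rpc_pid=96394 for this step
    sleep 1

    # Remove the listener again so the new run's I/O is stranded and the timeout path is exercised (timeout.sh@99).
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The earlier "wait 96276" at timeout.sh@92 plays the same role for the first run, blocking until that bdevperf client returns once the controller reset succeeds.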
00:21:27.996 19:37:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.258 [2024-07-15 19:37:17.836777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836838] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836965] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836989] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.836998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the 
state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.837241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07840 is same with the state(5) to be set 00:21:28.258 [2024-07-15 19:37:17.839175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.258 [2024-07-15 19:37:17.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.258 [2024-07-15 19:37:17.839444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.839453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.839475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.839496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.839518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.839541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.839562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:28.259 [2024-07-15 19:37:17.839574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.839982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.839991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.259 [2024-07-15 19:37:17.840012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.259 [2024-07-15 19:37:17.840369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.259 [2024-07-15 19:37:17.840382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 
19:37:17.840676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.260 [2024-07-15 19:37:17.840867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:28.260 [2024-07-15 19:37:17.840878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:28.260 [2024-07-15 19:37:17.840888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:28.260 [2024-07-15 19:37:17.840899 - 19:37:17.852094] nvme_qpair.c: [... repeated *NOTICE* pairs elided: every queued WRITE on sqid:1 (lba 81120 through 81520, various cid) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08); the remaining queued requests are drained via nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request ("Command completed manually") with the same completion status ...]
00:21:28.262 [2024-07-15 19:37:17.852150] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200bb40 was disconnected and freed. reset controller.
00:21:28.262 [2024-07-15 19:37:17.852286 - 19:37:17.852413] nvme_qpair.c: [... the four outstanding ASYNC EVENT REQUESTs on the admin qpair (qid:0, cid 3..0) are likewise completed with ABORTED - SQ DELETION (00/08) ...]
00:21:28.262 [2024-07-15 19:37:17.852425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set
00:21:28.262 [2024-07-15 19:37:17.852700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:28.262 [2024-07-15 19:37:17.852726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor
00:21:28.262 [2024-07-15 19:37:17.852841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:28.262 [2024-07-15 19:37:17.852868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f993e0 with addr=10.0.0.2, port=4420
[2024-07-15 19:37:17.852881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set 00:21:28.262 [2024-07-15 19:37:17.852903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor 00:21:28.262 [2024-07-15 19:37:17.852922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:28.262 [2024-07-15 19:37:17.852934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:28.262 [2024-07-15 19:37:17.852947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:28.262 [2024-07-15 19:37:17.852971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:28.262 [2024-07-15 19:37:17.852984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:28.262 19:37:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:29.196 [2024-07-15 19:37:18.853127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:29.196 [2024-07-15 19:37:18.853201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f993e0 with addr=10.0.0.2, port=4420 00:21:29.196 [2024-07-15 19:37:18.853218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set 00:21:29.196 [2024-07-15 19:37:18.853244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor 00:21:29.196 [2024-07-15 19:37:18.853263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:29.196 [2024-07-15 19:37:18.853273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:29.196 [2024-07-15 19:37:18.853284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.196 [2024-07-15 19:37:18.853309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:29.196 [2024-07-15 19:37:18.853321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.129 [2024-07-15 19:37:19.853481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.129 [2024-07-15 19:37:19.853551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f993e0 with addr=10.0.0.2, port=4420 00:21:30.129 [2024-07-15 19:37:19.853566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set 00:21:30.129 [2024-07-15 19:37:19.853592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor 00:21:30.129 [2024-07-15 19:37:19.853611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:30.129 [2024-07-15 19:37:19.853621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:30.129 [2024-07-15 19:37:19.853632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:30.129 [2024-07-15 19:37:19.853659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.129 [2024-07-15 19:37:19.853671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:31.064 [2024-07-15 19:37:20.857400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:31.064 [2024-07-15 19:37:20.857460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f993e0 with addr=10.0.0.2, port=4420 00:21:31.064 [2024-07-15 19:37:20.857476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f993e0 is same with the state(5) to be set 00:21:31.064 [2024-07-15 19:37:20.857746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f993e0 (9): Bad file descriptor 00:21:31.064 [2024-07-15 19:37:20.857999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:31.064 [2024-07-15 19:37:20.858012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:31.064 [2024-07-15 19:37:20.858024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:31.064 [2024-07-15 19:37:20.861983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:31.064 [2024-07-15 19:37:20.862012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:31.323 19:37:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.323 [2024-07-15 19:37:21.115864] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.581 19:37:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96394 00:21:32.146 [2024-07-15 19:37:21.898641] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
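The recovery above is driven by host/timeout.sh itself: the target listener appears to have been removed earlier (the connect() attempts fail with errno = 111), the initiator retries roughly once per second against 10.0.0.2:4420, and once the listener is re-added the pending controller reset completes. A minimal sketch of that step, using only commands already traced in this log (NQN, address, and the waited-on pid 96394 are specific to this run):

  sleep 3                                                   # host/timeout.sh@101: leave the listener down for ~3s of failed reconnects
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # host/timeout.sh@102: re-add the TCP listener so reconnect can succeed
  wait 96394                                                 # host/timeout.sh@103: wait for the backgrounded I/O job from this run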
00:21:37.410 00:21:37.410 Latency(us) 00:21:37.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.410 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:37.410 Verification LBA range: start 0x0 length 0x4000 00:21:37.410 NVMe0n1 : 10.01 5223.04 20.40 3555.76 0.00 14537.58 655.36 3019898.88 00:21:37.410 =================================================================================================================== 00:21:37.410 Total : 5223.04 20.40 3555.76 0.00 14537.58 0.00 3019898.88 00:21:37.410 0 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96234 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96234 ']' 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96234 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96234 00:21:37.410 killing process with pid 96234 00:21:37.410 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.410 00:21:37.410 Latency(us) 00:21:37.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.410 =================================================================================================================== 00:21:37.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:37.410 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96234' 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96234 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96234 00:21:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96521 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96521 /var/tmp/bdevperf.sock 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96521 ']' 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.411 19:37:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.411 [2024-07-15 19:37:26.926866] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:21:37.411 [2024-07-15 19:37:26.927195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96521 ] 00:21:37.411 [2024-07-15 19:37:27.068590] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.411 [2024-07-15 19:37:27.129161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.342 19:37:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.342 19:37:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:38.342 19:37:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96549 00:21:38.342 19:37:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96521 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:38.342 19:37:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:38.600 19:37:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:38.857 NVMe0n1 00:21:38.858 19:37:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.858 19:37:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96602 00:21:38.858 19:37:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:39.138 Running I/O for 10 seconds... 
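Before the I/O run below starts, the trace above shows the full setup for the second scenario: bdevperf is started and waits on its RPC socket, a bpftrace probe is attached to its pid, the NVMe bdev module options are set, and the controller is attached with an explicit controller-loss timeout and reconnect delay. A consolidated sketch of that sequence, assembled only from the commands traced above (paths, socket, NQN, address, and pid 96521 are those of this run; flag semantics are not restated beyond what the log itself shows):

  # Start bdevperf on core mask 0x4 with queue depth 128, 4096-byte random reads for 10s,
  # waiting for configuration over /var/tmp/bdevperf.sock (host/timeout.sh@109):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  # Attach the nvmf_timeout bpftrace script to the bdevperf pid (host/timeout.sh@115):
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96521 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &
  # Apply the bdev_nvme options used by the test, then attach the controller with a 5s
  # ctrlr-loss timeout and a 2s reconnect delay (host/timeout.sh@118 and @120):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the I/O run via the bdevperf RPC helper (host/timeout.sh@123):
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &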
00:21:40.108 19:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:40.108 [2024-07-15 19:37:29.840349 - 19:37:29.850889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: [... the *ERROR* line "The recv state of tqpair=0x1e0a950 is same with the state(5) to be set" is repeated many times while the listener is torn down ...]
00:21:40.109 [2024-07-15 19:37:29.851193 onward] nvme_qpair.c: [... as in the first scenario, every queued READ on sqid:1 (various cid; lba 22552, 97808, 3496, ...) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08); the run continues from the last command shown here ...]
00:21:40.110 [2024-07-15 19:37:29.851914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.110 [2024-07-15 19:37:29.851923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.851935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.851944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.851956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.851966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.851977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.851988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.852000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.852009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.852021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.852030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.852041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.852051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.852062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.852071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.110 [2024-07-15 19:37:29.852083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.110 [2024-07-15 19:37:29.852092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 
19:37:29.852134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.111 [2024-07-15 19:37:29.852861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.111 [2024-07-15 19:37:29.852871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.852882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.852891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.852903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.852912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.852924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.852933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.852944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.852953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.852965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.852974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:40.112 [2024-07-15 19:37:29.852985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.852994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.112 [2024-07-15 19:37:29.853651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.112 [2024-07-15 19:37:29.853662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118552 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.113 [2024-07-15 19:37:29.853956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.853967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b56f0 is same with the state(5) to be set 00:21:40.113 [2024-07-15 19:37:29.853979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:40.113 [2024-07-15 19:37:29.853988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:40.113 [2024-07-15 19:37:29.853997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26280 len:8 PRP1 0x0 PRP2 0x0 00:21:40.113 [2024-07-15 19:37:29.854006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.854051] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b56f0 was disconnected and freed. reset controller. 
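The long run of paired NOTICE records above is the driver draining qpair 0x22b56f0: every READ still queued on the I/O submission queue is completed manually with status ABORTED - SQ DELETION (status code type 00, status code 08) when the queue is torn down, after which the qpair is freed and the controller reset begins; the admin queue's queued ASYNC EVENT REQUESTs are drained the same way in the records that follow. A quick, informal way to gauge how many commands were in flight at that point is to count the abort completions in a captured copy of this log. This is only a sketch against a hypothetical log file name, not part of the test itself:

    # Count commands completed with ABORTED - SQ DELETION during the
    # qpair teardown (log path is hypothetical).
    grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log

    # Break the aborts down per queue id, in case more than one qpair
    # was drained in the same window.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf_timeout.log | sort | uniq -c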
00:21:40.113 [2024-07-15 19:37:29.854144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.113 [2024-07-15 19:37:29.854168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.854180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.113 [2024-07-15 19:37:29.854189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.854199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.113 [2024-07-15 19:37:29.854221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.854235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.113 [2024-07-15 19:37:29.854244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.113 [2024-07-15 19:37:29.854253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22463e0 is same with the state(5) to be set 00:21:40.113 [2024-07-15 19:37:29.854511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.113 [2024-07-15 19:37:29.854543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22463e0 (9): Bad file descriptor 00:21:40.113 [2024-07-15 19:37:29.860775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22463e0 (9): Bad file descriptor 00:21:40.113 [2024-07-15 19:37:29.860815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:40.113 [2024-07-15 19:37:29.860827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:40.113 [2024-07-15 19:37:29.860838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:40.113 [2024-07-15 19:37:29.860859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:40.113 [2024-07-15 19:37:29.860871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.113 19:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96602 00:21:42.640 [2024-07-15 19:37:31.861039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.640 [2024-07-15 19:37:31.861572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22463e0 with addr=10.0.0.2, port=4420 00:21:42.640 [2024-07-15 19:37:31.861687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22463e0 is same with the state(5) to be set 00:21:42.640 [2024-07-15 19:37:31.861779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22463e0 (9): Bad file descriptor 00:21:42.640 [2024-07-15 19:37:31.861889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:42.640 [2024-07-15 19:37:31.861971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:42.640 [2024-07-15 19:37:31.862055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.640 [2024-07-15 19:37:31.862167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.640 [2024-07-15 19:37:31.862282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.541 [2024-07-15 19:37:33.862571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.541 [2024-07-15 19:37:33.863050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22463e0 with addr=10.0.0.2, port=4420 00:21:44.541 [2024-07-15 19:37:33.863170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22463e0 is same with the state(5) to be set 00:21:44.541 [2024-07-15 19:37:33.863276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22463e0 (9): Bad file descriptor 00:21:44.541 [2024-07-15 19:37:33.863405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:44.541 [2024-07-15 19:37:33.863545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:44.541 [2024-07-15 19:37:33.863639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:44.541 [2024-07-15 19:37:33.863720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:44.541 [2024-07-15 19:37:33.863784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.465 [2024-07-15 19:37:35.863910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
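The repeated connect() failures above (errno = 111, i.e. connection refused) at roughly two-second intervals show the bdev_nvme layer retrying the TCP connection to 10.0.0.2 port 4420 on its reconnect-delay schedule: each attempt fails, the controller is put back into the failed state, and another reconnect is scheduled. Further down, the timeout test verifies this behaviour by counting the "reconnect delay bdev controller NVMe0" records in its trace output. A minimal sketch of that kind of check, assuming a trace file captured from the bdevperf run (the file name and threshold here are illustrative, not the test's exact code), would be:

    # Hypothetical trace file; the real test writes and removes its own trace.txt.
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)

    # The run only counts as good if the controller actually backed off
    # between reconnect attempts (here: at least three delayed reconnects).
    if (( delays < 3 )); then
        echo "expected reconnect delays, got $delays" >&2
        exit 1
    fi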
00:21:46.465 [2024-07-15 19:37:35.864409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.465 [2024-07-15 19:37:35.864525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.465 [2024-07-15 19:37:35.864608] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:46.465 [2024-07-15 19:37:35.864709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.401 00:21:47.401 Latency(us) 00:21:47.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.401 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:47.401 NVMe0n1 : 8.14 2388.46 9.33 15.72 0.00 53220.15 2472.49 7046430.72 00:21:47.401 =================================================================================================================== 00:21:47.401 Total : 2388.46 9.33 15.72 0.00 53220.15 2472.49 7046430.72 00:21:47.401 0 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.401 Attaching 5 probes... 00:21:47.401 1439.981386: reset bdev controller NVMe0 00:21:47.401 1440.063808: reconnect bdev controller NVMe0 00:21:47.401 3446.370939: reconnect delay bdev controller NVMe0 00:21:47.401 3446.393622: reconnect bdev controller NVMe0 00:21:47.401 5447.890516: reconnect delay bdev controller NVMe0 00:21:47.401 5447.916814: reconnect bdev controller NVMe0 00:21:47.401 7449.355373: reconnect delay bdev controller NVMe0 00:21:47.401 7449.378509: reconnect bdev controller NVMe0 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96549 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96521 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96521 ']' 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96521 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96521 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:47.401 killing process with pid 96521 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96521' 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96521 00:21:47.401 Received shutdown signal, test time was about 8.200716 seconds 00:21:47.401 00:21:47.401 Latency(us) 00:21:47.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.401 =================================================================================================================== 
00:21:47.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.401 19:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96521 00:21:47.401 19:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.659 19:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.660 rmmod nvme_tcp 00:21:47.660 rmmod nvme_fabrics 00:21:47.660 rmmod nvme_keyring 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95953 ']' 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95953 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95953 ']' 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95953 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95953 00:21:47.660 killing process with pid 95953 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95953' 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95953 00:21:47.660 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95953 00:21:47.918 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.918 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.918 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.918 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.918 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.918 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.919 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.919 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.919 19:37:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:47.919 ************************************ 
00:21:47.919 END TEST nvmf_timeout 00:21:47.919 ************************************ 00:21:47.919 00:21:47.919 real 0m46.391s 00:21:47.919 user 2m17.740s 00:21:47.919 sys 0m4.619s 00:21:47.919 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.919 19:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.177 19:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:48.177 19:37:37 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:21:48.177 19:37:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:21:48.177 19:37:37 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.177 19:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.177 19:37:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:21:48.177 ************************************ 00:21:48.177 END TEST nvmf_tcp 00:21:48.177 ************************************ 00:21:48.177 00:21:48.177 real 15m35.025s 00:21:48.177 user 41m47.385s 00:21:48.177 sys 3m16.353s 00:21:48.177 19:37:37 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.177 19:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.177 19:37:37 -- common/autotest_common.sh@1142 -- # return 0 00:21:48.177 19:37:37 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:21:48.177 19:37:37 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:48.177 19:37:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:48.177 19:37:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.177 19:37:37 -- common/autotest_common.sh@10 -- # set +x 00:21:48.177 ************************************ 00:21:48.178 START TEST spdkcli_nvmf_tcp 00:21:48.178 ************************************ 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:48.178 * Looking for test storage... 
00:21:48.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96816 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96816 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96816 ']' 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:48.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.178 19:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.437 [2024-07-15 19:37:37.992330] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:48.437 [2024-07-15 19:37:37.992448] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96816 ] 00:21:48.437 [2024-07-15 19:37:38.129937] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:48.437 [2024-07-15 19:37:38.190940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.437 [2024-07-15 19:37:38.190950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.696 19:37:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:48.696 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:48.696 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:48.696 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:48.696 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:48.696 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:48.696 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:48.696 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:48.696 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:21:48.696 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:48.696 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:48.696 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:48.696 ' 00:21:51.227 [2024-07-15 19:37:40.956852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.620 [2024-07-15 19:37:42.221932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:55.144 [2024-07-15 19:37:44.587596] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:57.047 [2024-07-15 19:37:46.633020] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:58.423 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:58.423 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:58.423 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:58.423 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:58.423 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:58.423 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:58.423 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:58.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 
IPv4', '127.0.0.1:4260', True] 00:21:58.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:58.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:58.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:58.423 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:21:58.684 19:37:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:21:58.942 19:37:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:59.200 19:37:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 
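The block above is spdkcli_job.py replaying the quoted command list against the nvmf_tgt started earlier (pid 96816), followed by check_match dumping the live configuration tree and diffing it against a stored template. A minimal sketch of that check, assuming the working directory is the SPDK repo root and the target is already listening on /var/tmp/spdk.sock:

  # dump the current configuration under /nvmf into the test file
  ./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  # compare it against the recorded spdkcli_nvmf.test.match template; a mismatch exits non-zero and fails the test
  ./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
  # remove the generated file, as the log shows common.sh doing afterwards
  rm -f test/spdkcli/match_files/spdkcli_nvmf.test
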
00:21:59.200 19:37:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:59.200 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.200 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.200 19:37:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:59.200 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.201 19:37:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.201 19:37:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:59.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:59.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:59.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:59.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:59.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:59.201 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:59.201 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:59.201 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:59.201 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:59.201 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:59.201 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:59.201 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:59.201 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:59.201 ' 00:22:04.475 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:04.475 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:04.475 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:04.475 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:04.475 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:04.475 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:04.476 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:04.476 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:04.476 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:04.476 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:04.476 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:04.476 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:04.476 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:04.476 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:04.476 19:37:54 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96816 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96816 ']' 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96816 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.476 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96816 00:22:04.732 killing process with pid 96816 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96816' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96816 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96816 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96816 ']' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96816 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96816 ']' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96816 00:22:04.732 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96816) - No such process 00:22:04.732 Process with pid 96816 is not found 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96816 is not found' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:04.732 00:22:04.732 real 0m16.637s 00:22:04.732 user 0m36.039s 00:22:04.732 sys 0m0.767s 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.732 ************************************ 00:22:04.732 END TEST spdkcli_nvmf_tcp 00:22:04.732 ************************************ 00:22:04.732 19:37:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.732 19:37:54 -- common/autotest_common.sh@1142 -- # return 0 00:22:04.732 19:37:54 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:04.732 19:37:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:04.732 19:37:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.732 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:22:04.732 ************************************ 00:22:04.732 START TEST nvmf_identify_passthru 00:22:04.732 
************************************ 00:22:04.732 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:04.990 * Looking for test storage... 00:22:04.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:04.990 19:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.990 19:37:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.990 19:37:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.990 19:37:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.990 19:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.990 19:37:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.990 19:37:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.990 19:37:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:04.990 19:37:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.990 19:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.990 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.991 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:04.991 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:04.991 Cannot find device "nvmf_tgt_br" 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:04.991 Cannot find device "nvmf_tgt_br2" 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:04.991 Cannot find device "nvmf_tgt_br" 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:04.991 Cannot find device "nvmf_tgt_br2" 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:04.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:04.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:22:04.991 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:05.249 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:05.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:05.249 00:22:05.250 --- 10.0.0.2 ping statistics --- 00:22:05.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.250 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:05.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:05.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:05.250 00:22:05.250 --- 10.0.0.3 ping statistics --- 00:22:05.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.250 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:05.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:05.250 00:22:05.250 --- 10.0.0.1 ping statistics --- 00:22:05.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.250 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.250 19:37:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.250 19:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.250 19:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:05.250 19:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:05.250 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:05.250 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:05.250 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:22:05.250 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:22:05.250 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:22:05.250 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:05.250 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:05.250 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:05.507 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
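At this point the script has located a local controller: gen_nvme.sh emits the PCIe NVMe devices as JSON, jq pulls the traddr fields, the first one (0000:00:10.0) becomes the bdf, and spdk_nvme_identify reads its serial number (12340 here). Roughly the same steps as a standalone sketch, assuming the working directory is the SPDK repo root; head -n1 stands in for the bdfs array indexing the helper actually uses:

  # list local NVMe controllers and take the first PCI address
  bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  # identify that controller directly over PCIe and pull out the serial number field
  serial=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  echo "$bdf $serial"   # e.g. 0000:00:10.0 12340
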
00:22:05.507 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:05.507 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:05.507 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97302 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97302 00:22:05.766 19:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97302 ']' 00:22:05.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.766 19:37:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.766 [2024-07-15 19:37:55.468280] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:22:05.766 [2024-07-15 19:37:55.468403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.025 [2024-07-15 19:37:55.611103] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.025 [2024-07-15 19:37:55.685916] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.025 [2024-07-15 19:37:55.686390] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.025 [2024-07-15 19:37:55.686645] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.025 [2024-07-15 19:37:55.686904] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:06.025 [2024-07-15 19:37:55.687120] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.025 [2024-07-15 19:37:55.687443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.025 [2024-07-15 19:37:55.687481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.025 [2024-07-15 19:37:55.687558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.025 [2024-07-15 19:37:55.687563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 [2024-07-15 19:37:56.558666] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 [2024-07-15 19:37:56.572685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 Nvme0n1 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 [2024-07-15 19:37:56.721022] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.960 [ 00:22:06.960 { 00:22:06.960 "allow_any_host": true, 00:22:06.960 "hosts": [], 00:22:06.960 "listen_addresses": [], 00:22:06.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:06.960 "subtype": "Discovery" 00:22:06.960 }, 00:22:06.960 { 00:22:06.960 "allow_any_host": true, 00:22:06.960 "hosts": [], 00:22:06.960 "listen_addresses": [ 00:22:06.960 { 00:22:06.960 "adrfam": "IPv4", 00:22:06.960 "traddr": "10.0.0.2", 00:22:06.960 "trsvcid": "4420", 00:22:06.960 "trtype": "TCP" 00:22:06.960 } 00:22:06.960 ], 00:22:06.960 "max_cntlid": 65519, 00:22:06.960 "max_namespaces": 1, 00:22:06.960 "min_cntlid": 1, 00:22:06.960 "model_number": "SPDK bdev Controller", 00:22:06.960 "namespaces": [ 00:22:06.960 { 00:22:06.960 "bdev_name": "Nvme0n1", 00:22:06.960 "name": "Nvme0n1", 00:22:06.960 "nguid": "67C7B99A799F47D08A85DA8CC729217A", 00:22:06.960 "nsid": 1, 00:22:06.960 "uuid": "67c7b99a-799f-47d0-8a85-da8cc729217a" 00:22:06.960 } 00:22:06.960 ], 00:22:06.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.960 "serial_number": "SPDK00000000000001", 00:22:06.960 "subtype": "NVMe" 00:22:06.960 } 00:22:06.960 ] 00:22:06.960 19:37:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.960 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:07.217 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:07.217 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:07.217 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:07.217 19:37:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:07.476 19:37:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:07.476 19:37:57 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:07.476 19:37:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:07.476 19:37:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.476 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.476 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:07.476 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.476 19:37:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:07.476 19:37:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:07.476 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.476 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:07.476 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.476 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:07.476 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.476 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.476 rmmod nvme_tcp 00:22:07.476 rmmod nvme_fabrics 00:22:07.476 rmmod nvme_keyring 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97302 ']' 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97302 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97302 ']' 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97302 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97302 00:22:07.733 killing process with pid 97302 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97302' 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97302 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97302 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.733 19:37:57 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.733 19:37:57 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:07.733 ************************************ 00:22:07.733 END TEST nvmf_identify_passthru 00:22:07.733 ************************************ 00:22:07.733 00:22:07.733 real 0m3.015s 00:22:07.733 user 0m7.739s 00:22:07.733 sys 0m0.737s 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.733 19:37:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:07.990 19:37:57 -- common/autotest_common.sh@1142 -- # return 0 00:22:07.990 19:37:57 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:07.990 19:37:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:07.990 19:37:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.990 19:37:57 -- common/autotest_common.sh@10 -- # set +x 00:22:07.990 ************************************ 00:22:07.990 START TEST nvmf_dif 00:22:07.990 ************************************ 00:22:07.990 19:37:57 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:07.990 * Looking for test storage... 00:22:07.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:07.990 19:37:57 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.990 19:37:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.990 19:37:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.990 19:37:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.990 19:37:57 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.990 19:37:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.990 19:37:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.990 19:37:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:07.990 19:37:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.990 19:37:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:07.990 19:37:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:07.990 19:37:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:07.990 19:37:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:07.990 19:37:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.990 19:37:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:07.990 19:37:57 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:07.990 Cannot find device "nvmf_tgt_br" 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:07.990 Cannot find device "nvmf_tgt_br2" 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:07.990 Cannot find device "nvmf_tgt_br" 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:07.990 Cannot find device "nvmf_tgt_br2" 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:07.990 19:37:57 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:08.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:08.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:08.249 19:37:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:08.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:08.249 00:22:08.249 --- 10.0.0.2 ping statistics --- 00:22:08.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.249 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:08.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:08.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:08.249 00:22:08.249 --- 10.0.0.3 ping statistics --- 00:22:08.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.249 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:08.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:22:08.249 00:22:08.249 --- 10.0.0.1 ping statistics --- 00:22:08.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.249 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:08.249 19:37:58 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:08.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:08.813 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:08.813 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.813 19:37:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:08.813 19:37:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.813 19:37:58 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.813 19:37:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:08.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97645 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97645 00:22:08.813 19:37:58 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97645 ']' 00:22:08.813 19:37:58 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:08.814 19:37:58 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.814 19:37:58 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.814 19:37:58 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.814 19:37:58 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.814 19:37:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:08.814 [2024-07-15 19:37:58.485334] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:22:08.814 [2024-07-15 19:37:58.485456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.070 [2024-07-15 19:37:58.627416] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.070 [2024-07-15 19:37:58.696687] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:09.070 [2024-07-15 19:37:58.696753] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.070 [2024-07-15 19:37:58.696767] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.070 [2024-07-15 19:37:58.696777] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.070 [2024-07-15 19:37:58.696785] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.070 [2024-07-15 19:37:58.696813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:10.004 19:37:59 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 19:37:59 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.004 19:37:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:10.004 19:37:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 [2024-07-15 19:37:59.495047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.004 19:37:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.004 19:37:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 ************************************ 00:22:10.004 START TEST fio_dif_1_default 00:22:10.004 ************************************ 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 bdev_null0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.004 19:37:59 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:10.004 [2024-07-15 19:37:59.551175] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.004 { 00:22:10.004 "params": { 00:22:10.004 "name": "Nvme$subsystem", 00:22:10.004 "trtype": "$TEST_TRANSPORT", 00:22:10.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.004 "adrfam": "ipv4", 00:22:10.004 "trsvcid": "$NVMF_PORT", 00:22:10.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.004 "hdgst": ${hdgst:-false}, 00:22:10.004 "ddgst": ${ddgst:-false} 00:22:10.004 }, 00:22:10.004 "method": "bdev_nvme_attach_controller" 00:22:10.004 } 00:22:10.004 EOF 00:22:10.004 )") 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.004 19:37:59 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:10.004 "params": { 00:22:10.004 "name": "Nvme0", 00:22:10.004 "trtype": "tcp", 00:22:10.004 "traddr": "10.0.0.2", 00:22:10.004 "adrfam": "ipv4", 00:22:10.004 "trsvcid": "4420", 00:22:10.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.004 "hdgst": false, 00:22:10.004 "ddgst": false 00:22:10.004 }, 00:22:10.004 "method": "bdev_nvme_attach_controller" 00:22:10.004 }' 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:10.004 19:37:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.004 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:10.004 fio-3.35 00:22:10.004 Starting 1 thread 00:22:22.198 00:22:22.198 filename0: (groupid=0, jobs=1): err= 0: pid=97728: Mon Jul 15 19:38:10 2024 00:22:22.198 read: IOPS=630, BW=2523KiB/s (2583kB/s)(24.7MiB/10015msec) 00:22:22.198 slat (nsec): min=6810, max=75980, avg=9473.09, stdev=4348.48 00:22:22.198 clat (usec): min=425, max=42572, avg=6313.54, stdev=14223.35 00:22:22.198 lat (usec): min=433, max=42582, avg=6323.01, stdev=14223.32 00:22:22.198 clat percentiles (usec): 00:22:22.198 | 1.00th=[ 453], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 478], 00:22:22.198 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[ 
506], 00:22:22.198 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[40633], 95.00th=[41157], 00:22:22.198 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:22:22.198 | 99.99th=[42730] 00:22:22.198 bw ( KiB/s): min= 1120, max= 4064, per=100.00%, avg=2524.50, stdev=725.41, samples=20 00:22:22.198 iops : min= 280, max= 1016, avg=631.10, stdev=181.35, samples=20 00:22:22.198 lat (usec) : 500=52.64%, 750=32.98% 00:22:22.198 lat (msec) : 10=0.06%, 50=14.31% 00:22:22.198 cpu : usr=91.32%, sys=7.91%, ctx=27, majf=0, minf=9 00:22:22.198 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.198 issued rwts: total=6316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.198 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:22.198 00:22:22.198 Run status group 0 (all jobs): 00:22:22.198 READ: bw=2523KiB/s (2583kB/s), 2523KiB/s-2523KiB/s (2583kB/s-2583kB/s), io=24.7MiB (25.9MB), run=10015-10015msec 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 ************************************ 00:22:22.198 END TEST fio_dif_1_default 00:22:22.198 ************************************ 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.198 00:22:22.198 real 0m10.959s 00:22:22.198 user 0m9.763s 00:22:22.198 sys 0m1.014s 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 19:38:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:22.198 19:38:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:22.198 19:38:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:22.198 19:38:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 ************************************ 00:22:22.198 START TEST fio_dif_1_multi_subsystems 00:22:22.198 ************************************ 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 bdev_null0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 [2024-07-15 19:38:10.549922] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:22.198 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.199 bdev_null1 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.199 19:38:10 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:22.199 { 00:22:22.199 "params": { 00:22:22.199 "name": "Nvme$subsystem", 00:22:22.199 "trtype": "$TEST_TRANSPORT", 00:22:22.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.199 "adrfam": "ipv4", 00:22:22.199 "trsvcid": "$NVMF_PORT", 00:22:22.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.199 "hdgst": ${hdgst:-false}, 00:22:22.199 "ddgst": ${ddgst:-false} 00:22:22.199 }, 00:22:22.199 "method": "bdev_nvme_attach_controller" 00:22:22.199 } 00:22:22.199 EOF 00:22:22.199 )") 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:22.199 19:38:10 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:22.199 { 00:22:22.199 "params": { 00:22:22.199 "name": "Nvme$subsystem", 00:22:22.199 "trtype": "$TEST_TRANSPORT", 00:22:22.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.199 "adrfam": "ipv4", 00:22:22.199 "trsvcid": "$NVMF_PORT", 00:22:22.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.199 "hdgst": ${hdgst:-false}, 00:22:22.199 "ddgst": ${ddgst:-false} 00:22:22.199 }, 00:22:22.199 "method": "bdev_nvme_attach_controller" 00:22:22.199 } 00:22:22.199 EOF 00:22:22.199 )") 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
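The fio run traced here goes through fio_bdev, which preloads SPDK's fio bdev plugin and hands fio two generated inputs over file descriptors: fd 62 carries the bdev JSON (the bdev_nvme_attach_controller parameters printed just below) and fd 61 carries the job file built by gen_fio_conf. Condensed from the LD_PRELOAD and /usr/src/fio/fio lines of this run, the invocation amounts to the sketch below (paths and fds are taken from this trace, not a general recipe):

  # SPDK fio bdev plugin, preloaded so fio can use ioengine=spdk_bdev
  # fd 62: generated bdev JSON (bdev_nvme_attach_controller params), fd 61: generated fio job file
  LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
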
00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:22.199 "params": { 00:22:22.199 "name": "Nvme0", 00:22:22.199 "trtype": "tcp", 00:22:22.199 "traddr": "10.0.0.2", 00:22:22.199 "adrfam": "ipv4", 00:22:22.199 "trsvcid": "4420", 00:22:22.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:22.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:22.199 "hdgst": false, 00:22:22.199 "ddgst": false 00:22:22.199 }, 00:22:22.199 "method": "bdev_nvme_attach_controller" 00:22:22.199 },{ 00:22:22.199 "params": { 00:22:22.199 "name": "Nvme1", 00:22:22.199 "trtype": "tcp", 00:22:22.199 "traddr": "10.0.0.2", 00:22:22.199 "adrfam": "ipv4", 00:22:22.199 "trsvcid": "4420", 00:22:22.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.199 "hdgst": false, 00:22:22.199 "ddgst": false 00:22:22.199 }, 00:22:22.199 "method": "bdev_nvme_attach_controller" 00:22:22.199 }' 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:22.199 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:22.200 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:22.200 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:22.200 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:22.200 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:22.200 19:38:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:22.200 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:22.200 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:22.200 fio-3.35 00:22:22.200 Starting 2 threads 00:22:32.197 00:22:32.197 filename0: (groupid=0, jobs=1): err= 0: pid=97883: Mon Jul 15 19:38:21 2024 00:22:32.197 read: IOPS=231, BW=928KiB/s (950kB/s)(9280KiB/10004msec) 00:22:32.197 slat (nsec): min=7808, max=57469, avg=11855.66, stdev=7919.78 00:22:32.197 clat (usec): min=455, max=42623, avg=17209.48, stdev=19894.73 00:22:32.197 lat (usec): min=463, max=42637, avg=17221.33, stdev=19894.49 00:22:32.197 clat percentiles (usec): 00:22:32.197 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 494], 20.00th=[ 510], 00:22:32.197 | 30.00th=[ 529], 40.00th=[ 635], 50.00th=[ 1123], 60.00th=[40633], 00:22:32.197 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:22:32.197 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:22:32.197 | 99.99th=[42730] 00:22:32.197 bw ( KiB/s): min= 448, max= 2432, per=48.84%, avg=941.47, stdev=475.27, samples=19 00:22:32.197 iops : 
min= 112, max= 608, avg=235.37, stdev=118.82, samples=19 00:22:32.197 lat (usec) : 500=14.57%, 750=29.57%, 1000=4.48% 00:22:32.197 lat (msec) : 2=10.34%, 10=0.17%, 50=40.86% 00:22:32.197 cpu : usr=95.30%, sys=4.22%, ctx=24, majf=0, minf=0 00:22:32.198 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.198 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.198 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:32.198 filename1: (groupid=0, jobs=1): err= 0: pid=97884: Mon Jul 15 19:38:21 2024 00:22:32.198 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.81MiB/10031msec) 00:22:32.198 slat (nsec): min=5516, max=55092, avg=10357.26, stdev=5030.49 00:22:32.198 clat (usec): min=455, max=42053, avg=15939.82, stdev=19593.28 00:22:32.198 lat (usec): min=463, max=42068, avg=15950.17, stdev=19593.50 00:22:32.198 clat percentiles (usec): 00:22:32.198 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 523], 00:22:32.198 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 848], 60.00th=[ 1172], 00:22:32.198 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:22:32.198 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:22:32.198 | 99.99th=[42206] 00:22:32.198 bw ( KiB/s): min= 576, max= 3584, per=52.05%, avg=1003.20, stdev=846.66, samples=20 00:22:32.198 iops : min= 144, max= 896, avg=250.80, stdev=211.66, samples=20 00:22:32.198 lat (usec) : 500=11.86%, 750=37.42%, 1000=2.95% 00:22:32.198 lat (msec) : 2=9.87%, 10=0.16%, 50=37.74% 00:22:32.198 cpu : usr=95.44%, sys=4.10%, ctx=18, majf=0, minf=9 00:22:32.198 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.198 issued rwts: total=2512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.198 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:32.198 00:22:32.198 Run status group 0 (all jobs): 00:22:32.198 READ: bw=1927KiB/s (1973kB/s), 928KiB/s-1002KiB/s (950kB/s-1026kB/s), io=18.9MiB (19.8MB), run=10004-10031msec 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
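Each subsystem in these DIF tests is built and torn down with the same short RPC sequence; the calls traced above for subsystem 0 (and repeated for subsystem 1 just below) condense to the following sketch, where rpc_cmd is the harness wrapper around SPDK's rpc.py and every method and argument is taken from this run:

  # backing device: 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it over NVMe/TCP on the listener address used throughout this run
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # teardown after the fio job completes, in reverse order
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_null_delete bdev_null0
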
00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 ************************************ 00:22:32.198 END TEST fio_dif_1_multi_subsystems 00:22:32.198 ************************************ 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 00:22:32.198 real 0m11.076s 00:22:32.198 user 0m19.839s 00:22:32.198 sys 0m1.043s 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 19:38:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:32.198 19:38:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:32.198 19:38:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:32.198 19:38:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 ************************************ 00:22:32.198 START TEST fio_dif_rand_params 00:22:32.198 ************************************ 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 bdev_null0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.198 [2024-07-15 19:38:21.676848] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.198 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.198 { 00:22:32.198 "params": { 00:22:32.198 "name": "Nvme$subsystem", 00:22:32.198 "trtype": "$TEST_TRANSPORT", 00:22:32.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.198 "adrfam": "ipv4", 00:22:32.198 "trsvcid": "$NVMF_PORT", 00:22:32.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.198 "hdgst": ${hdgst:-false}, 00:22:32.198 "ddgst": ${ddgst:-false} 00:22:32.198 }, 00:22:32.198 "method": "bdev_nvme_attach_controller" 00:22:32.198 } 00:22:32.199 EOF 00:22:32.199 )") 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:32.199 "params": { 00:22:32.199 "name": "Nvme0", 00:22:32.199 "trtype": "tcp", 00:22:32.199 "traddr": "10.0.0.2", 00:22:32.199 "adrfam": "ipv4", 00:22:32.199 "trsvcid": "4420", 00:22:32.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:32.199 "hdgst": false, 00:22:32.199 "ddgst": false 00:22:32.199 }, 00:22:32.199 "method": "bdev_nvme_attach_controller" 00:22:32.199 }' 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:32.199 19:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.199 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:32.199 ... 00:22:32.199 fio-3.35 00:22:32.199 Starting 3 threads 00:22:38.758 00:22:38.758 filename0: (groupid=0, jobs=1): err= 0: pid=98039: Mon Jul 15 19:38:27 2024 00:22:38.758 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(127MiB/5006msec) 00:22:38.758 slat (nsec): min=6672, max=51396, avg=15002.89, stdev=4687.22 00:22:38.758 clat (usec): min=4370, max=22915, avg=14740.64, stdev=2817.25 00:22:38.758 lat (usec): min=4380, max=22932, avg=14755.64, stdev=2817.38 00:22:38.758 clat percentiles (usec): 00:22:38.758 | 1.00th=[ 4424], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[14091], 00:22:38.758 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:22:38.758 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:22:38.758 | 99.00th=[20055], 99.50th=[21365], 99.90th=[22676], 99.95th=[22938], 00:22:38.758 | 99.99th=[22938] 00:22:38.758 bw ( KiB/s): min=23040, max=34491, per=29.52%, avg=25982.10, stdev=3122.76, samples=10 00:22:38.758 iops : min= 180, max= 269, avg=202.90, stdev=24.27, samples=10 00:22:38.758 lat (msec) : 10=10.13%, 20=88.89%, 50=0.98% 00:22:38.758 cpu : usr=91.53%, sys=6.75%, ctx=6, majf=0, minf=0 00:22:38.758 IO depths : 1=5.2%, 2=94.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.759 issued rwts: total=1017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:38.759 filename0: (groupid=0, jobs=1): err= 0: pid=98040: Mon Jul 15 19:38:27 2024 00:22:38.759 read: IOPS=231, BW=28.9MiB/s (30.4MB/s)(145MiB/5005msec) 00:22:38.759 slat (nsec): min=7778, max=37174, avg=13413.26, stdev=4525.26 00:22:38.759 clat (usec): min=6376, max=53810, avg=12931.37, stdev=4922.43 00:22:38.759 lat (usec): min=6390, max=53826, avg=12944.78, stdev=4922.75 00:22:38.759 clat percentiles (usec): 00:22:38.759 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[10028], 20.00th=[11731], 00:22:38.759 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[12911], 00:22:38.759 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[15008], 00:22:38.759 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:22:38.759 | 99.99th=[53740] 00:22:38.759 bw ( KiB/s): min=25344, max=34629, per=33.63%, avg=29594.80, stdev=2667.30, samples=10 00:22:38.759 iops : min= 198, max= 270, avg=231.10, stdev=20.75, samples=10 00:22:38.759 lat (msec) : 10=9.92%, 20=88.78%, 100=1.29% 00:22:38.759 cpu : usr=92.25%, sys=6.14%, ctx=9, majf=0, minf=0 00:22:38.759 IO depths : 1=5.9%, 2=94.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.759 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:38.759 filename0: (groupid=0, jobs=1): err= 0: pid=98041: Mon Jul 15 19:38:27 2024 00:22:38.759 read: IOPS=252, BW=31.6MiB/s 
(33.1MB/s)(158MiB/5006msec) 00:22:38.759 slat (nsec): min=6408, max=44921, avg=13785.28, stdev=4099.96 00:22:38.759 clat (usec): min=6721, max=54221, avg=11843.39, stdev=5757.37 00:22:38.759 lat (usec): min=6733, max=54243, avg=11857.17, stdev=5758.27 00:22:38.759 clat percentiles (usec): 00:22:38.759 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10421], 00:22:38.759 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:22:38.759 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[13566], 00:22:38.759 | 99.00th=[51643], 99.50th=[53216], 99.90th=[53216], 99.95th=[54264], 00:22:38.759 | 99.99th=[54264] 00:22:38.759 bw ( KiB/s): min=19968, max=36864, per=36.73%, avg=32325.70, stdev=5371.04, samples=10 00:22:38.759 iops : min= 156, max= 288, avg=252.50, stdev=41.93, samples=10 00:22:38.759 lat (msec) : 10=11.37%, 20=86.73%, 100=1.90% 00:22:38.759 cpu : usr=91.43%, sys=6.61%, ctx=36, majf=0, minf=0 00:22:38.759 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.759 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:38.759 00:22:38.759 Run status group 0 (all jobs): 00:22:38.759 READ: bw=85.9MiB/s (90.1MB/s), 25.4MiB/s-31.6MiB/s (26.6MB/s-33.1MB/s), io=430MiB (451MB), run=5005-5006msec 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 
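From here fio_dif_rand_params switches its knobs for the next case: the target keeps the TCP transport created at the start of nvmf_dif with DIF insert/strip enabled, and only the bdev-side DIF type and the fio geometry change (the values being set around this point are NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2). A condensed sketch of the pieces involved, using only commands and values that appear in this trace:

  # created once for the nvmf_dif suite, earlier in this run
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # per-case knobs for the next rand_params run
  NULL_DIF=2; bs=4k; numjobs=8; iodepth=16; files=2
  # each namespace is a null bdev with 16 bytes of metadata and the chosen DIF type
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type "$NULL_DIF"
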
00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 bdev_null0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 [2024-07-15 19:38:27.592619] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 bdev_null1 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 bdev_null2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.759 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.759 { 00:22:38.759 "params": { 00:22:38.759 "name": "Nvme$subsystem", 00:22:38.759 "trtype": "$TEST_TRANSPORT", 00:22:38.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.759 "adrfam": "ipv4", 00:22:38.759 "trsvcid": "$NVMF_PORT", 00:22:38.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.760 "hdgst": ${hdgst:-false}, 00:22:38.760 "ddgst": ${ddgst:-false} 00:22:38.760 }, 00:22:38.760 "method": "bdev_nvme_attach_controller" 00:22:38.760 } 00:22:38.760 EOF 00:22:38.760 )") 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.760 { 00:22:38.760 "params": { 00:22:38.760 "name": "Nvme$subsystem", 00:22:38.760 "trtype": "$TEST_TRANSPORT", 00:22:38.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.760 "adrfam": "ipv4", 00:22:38.760 "trsvcid": "$NVMF_PORT", 00:22:38.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.760 "hdgst": ${hdgst:-false}, 00:22:38.760 "ddgst": ${ddgst:-false} 00:22:38.760 }, 00:22:38.760 "method": "bdev_nvme_attach_controller" 00:22:38.760 } 00:22:38.760 EOF 00:22:38.760 )") 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:38.760 19:38:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.760 { 00:22:38.760 "params": { 00:22:38.760 "name": "Nvme$subsystem", 00:22:38.760 "trtype": "$TEST_TRANSPORT", 00:22:38.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.760 "adrfam": "ipv4", 00:22:38.760 "trsvcid": "$NVMF_PORT", 00:22:38.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.760 "hdgst": ${hdgst:-false}, 00:22:38.760 "ddgst": ${ddgst:-false} 00:22:38.760 }, 00:22:38.760 "method": "bdev_nvme_attach_controller" 00:22:38.760 } 00:22:38.760 EOF 00:22:38.760 )") 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:38.760 "params": { 00:22:38.760 "name": "Nvme0", 00:22:38.760 "trtype": "tcp", 00:22:38.760 "traddr": "10.0.0.2", 00:22:38.760 "adrfam": "ipv4", 00:22:38.760 "trsvcid": "4420", 00:22:38.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:38.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:38.760 "hdgst": false, 00:22:38.760 "ddgst": false 00:22:38.760 }, 00:22:38.760 "method": "bdev_nvme_attach_controller" 00:22:38.760 },{ 00:22:38.760 "params": { 00:22:38.760 "name": "Nvme1", 00:22:38.760 "trtype": "tcp", 00:22:38.760 "traddr": "10.0.0.2", 00:22:38.760 "adrfam": "ipv4", 00:22:38.760 "trsvcid": "4420", 00:22:38.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.760 "hdgst": false, 00:22:38.760 "ddgst": false 00:22:38.760 }, 00:22:38.760 "method": "bdev_nvme_attach_controller" 00:22:38.760 },{ 00:22:38.760 "params": { 00:22:38.760 "name": "Nvme2", 00:22:38.760 "trtype": "tcp", 00:22:38.760 "traddr": "10.0.0.2", 00:22:38.760 "adrfam": "ipv4", 00:22:38.760 "trsvcid": "4420", 00:22:38.760 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.760 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.760 "hdgst": false, 00:22:38.760 "ddgst": false 00:22:38.760 }, 00:22:38.760 "method": "bdev_nvme_attach_controller" 00:22:38.760 }' 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:38.760 19:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.760 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:38.760 ... 00:22:38.760 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:38.760 ... 00:22:38.760 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:38.760 ... 00:22:38.760 fio-3.35 00:22:38.760 Starting 24 threads 00:22:50.954 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98132: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=193, BW=773KiB/s (792kB/s)(7764KiB/10043msec) 00:22:50.954 slat (usec): min=7, max=7211, avg=15.21, stdev=163.49 00:22:50.954 clat (msec): min=30, max=183, avg=82.58, stdev=25.31 00:22:50.954 lat (msec): min=30, max=183, avg=82.60, stdev=25.31 00:22:50.954 clat percentiles (msec): 00:22:50.954 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 61], 00:22:50.954 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 85], 00:22:50.954 | 70.00th=[ 91], 80.00th=[ 102], 90.00th=[ 121], 95.00th=[ 131], 00:22:50.954 | 99.00th=[ 150], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 184], 00:22:50.954 | 99.99th=[ 184] 00:22:50.954 bw ( KiB/s): min= 560, max= 992, per=4.29%, avg=770.00, stdev=121.35, samples=20 00:22:50.954 iops : min= 140, max= 248, avg=192.50, stdev=30.34, samples=20 00:22:50.954 lat (msec) : 50=8.91%, 100=70.48%, 250=20.61% 00:22:50.954 cpu : usr=40.17%, sys=1.30%, ctx=1290, majf=0, minf=9 00:22:50.954 IO depths : 1=1.0%, 2=2.2%, 4=11.0%, 8=73.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:22:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 issued rwts: total=1941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98133: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=165, BW=663KiB/s (679kB/s)(6644KiB/10019msec) 00:22:50.954 slat (usec): min=7, max=3903, avg=17.69, stdev=130.25 00:22:50.954 clat (msec): min=39, max=261, avg=96.30, stdev=33.10 00:22:50.954 lat (msec): min=39, max=261, avg=96.31, stdev=33.11 00:22:50.954 clat percentiles (msec): 00:22:50.954 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 64], 20.00th=[ 72], 00:22:50.954 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 97], 00:22:50.954 | 70.00th=[ 110], 80.00th=[ 120], 90.00th=[ 138], 95.00th=[ 159], 00:22:50.954 | 99.00th=[ 224], 99.50th=[ 226], 99.90th=[ 262], 99.95th=[ 262], 00:22:50.954 | 99.99th=[ 262] 00:22:50.954 bw ( KiB/s): min= 384, max= 952, per=3.67%, avg=658.00, stdev=168.05, samples=20 00:22:50.954 iops : min= 96, max= 238, avg=164.50, stdev=42.01, samples=20 00:22:50.954 lat (msec) : 
50=3.85%, 100=57.38%, 250=38.47%, 500=0.30% 00:22:50.954 cpu : usr=41.55%, sys=1.43%, ctx=1171, majf=0, minf=9 00:22:50.954 IO depths : 1=3.8%, 2=8.1%, 4=19.5%, 8=59.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:22:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 complete : 0=0.0%, 4=92.4%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 issued rwts: total=1661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98134: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=165, BW=662KiB/s (678kB/s)(6636KiB/10019msec) 00:22:50.954 slat (usec): min=4, max=8024, avg=21.68, stdev=278.14 00:22:50.954 clat (msec): min=45, max=190, avg=96.47, stdev=25.50 00:22:50.954 lat (msec): min=45, max=191, avg=96.49, stdev=25.51 00:22:50.954 clat percentiles (msec): 00:22:50.954 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 73], 00:22:50.954 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 105], 00:22:50.954 | 70.00th=[ 112], 80.00th=[ 118], 90.00th=[ 131], 95.00th=[ 144], 00:22:50.954 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 192], 00:22:50.954 | 99.99th=[ 192] 00:22:50.954 bw ( KiB/s): min= 512, max= 896, per=3.66%, avg=657.20, stdev=120.13, samples=20 00:22:50.954 iops : min= 128, max= 224, avg=164.30, stdev=30.03, samples=20 00:22:50.954 lat (msec) : 50=4.04%, 100=53.10%, 250=42.86% 00:22:50.954 cpu : usr=35.35%, sys=1.06%, ctx=1081, majf=0, minf=9 00:22:50.954 IO depths : 1=3.9%, 2=8.1%, 4=19.1%, 8=60.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:22:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98135: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=205, BW=824KiB/s (843kB/s)(8284KiB/10057msec) 00:22:50.954 slat (usec): min=3, max=8033, avg=18.06, stdev=197.17 00:22:50.954 clat (msec): min=22, max=168, avg=77.52, stdev=23.90 00:22:50.954 lat (msec): min=22, max=168, avg=77.53, stdev=23.90 00:22:50.954 clat percentiles (msec): 00:22:50.954 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:22:50.954 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:22:50.954 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:22:50.954 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:22:50.954 | 99.99th=[ 169] 00:22:50.954 bw ( KiB/s): min= 640, max= 1158, per=4.58%, avg=821.35, stdev=127.46, samples=20 00:22:50.954 iops : min= 160, max= 289, avg=205.25, stdev=31.82, samples=20 00:22:50.954 lat (msec) : 50=14.05%, 100=69.82%, 250=16.13% 00:22:50.954 cpu : usr=36.60%, sys=1.21%, ctx=1056, majf=0, minf=9 00:22:50.954 IO depths : 1=0.6%, 2=1.4%, 4=7.4%, 8=77.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:22:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 issued rwts: total=2071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98136: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=203, BW=813KiB/s (833kB/s)(8168KiB/10044msec) 00:22:50.954 slat 
(usec): min=6, max=8056, avg=25.49, stdev=281.08 00:22:50.954 clat (msec): min=32, max=178, avg=78.55, stdev=25.90 00:22:50.954 lat (msec): min=32, max=178, avg=78.57, stdev=25.90 00:22:50.954 clat percentiles (msec): 00:22:50.954 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:22:50.954 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 84], 00:22:50.954 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 121], 00:22:50.954 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:22:50.954 | 99.99th=[ 180] 00:22:50.954 bw ( KiB/s): min= 544, max= 1200, per=4.52%, avg=810.40, stdev=181.60, samples=20 00:22:50.954 iops : min= 136, max= 300, avg=202.60, stdev=45.40, samples=20 00:22:50.954 lat (msec) : 50=17.09%, 100=62.29%, 250=20.62% 00:22:50.954 cpu : usr=38.43%, sys=1.23%, ctx=1075, majf=0, minf=9 00:22:50.954 IO depths : 1=1.8%, 2=3.7%, 4=11.5%, 8=71.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:22:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98137: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=181, BW=725KiB/s (743kB/s)(7284KiB/10041msec) 00:22:50.954 slat (usec): min=3, max=8029, avg=24.85, stdev=325.05 00:22:50.954 clat (msec): min=42, max=200, avg=88.04, stdev=30.65 00:22:50.954 lat (msec): min=42, max=200, avg=88.07, stdev=30.66 00:22:50.954 clat percentiles (msec): 00:22:50.954 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:22:50.954 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 93], 00:22:50.954 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 136], 95.00th=[ 153], 00:22:50.954 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 201], 99.95th=[ 201], 00:22:50.954 | 99.99th=[ 201] 00:22:50.954 bw ( KiB/s): min= 512, max= 1120, per=4.03%, avg=722.00, stdev=159.32, samples=20 00:22:50.954 iops : min= 128, max= 280, avg=180.50, stdev=39.83, samples=20 00:22:50.954 lat (msec) : 50=13.34%, 100=57.44%, 250=29.21% 00:22:50.954 cpu : usr=34.94%, sys=1.25%, ctx=866, majf=0, minf=9 00:22:50.954 IO depths : 1=2.1%, 2=4.7%, 4=13.6%, 8=68.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:22:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.954 issued rwts: total=1821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.954 filename0: (groupid=0, jobs=1): err= 0: pid=98138: Mon Jul 15 19:38:38 2024 00:22:50.954 read: IOPS=184, BW=738KiB/s (756kB/s)(7408KiB/10037msec) 00:22:50.954 slat (usec): min=4, max=8032, avg=24.33, stdev=322.42 00:22:50.955 clat (msec): min=35, max=197, avg=86.46, stdev=26.71 00:22:50.955 lat (msec): min=35, max=197, avg=86.48, stdev=26.71 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 64], 00:22:50.955 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 88], 00:22:50.955 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 131], 00:22:50.955 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 199], 99.95th=[ 199], 00:22:50.955 | 99.99th=[ 199] 00:22:50.955 bw ( KiB/s): min= 512, max= 1000, per=4.10%, avg=736.75, stdev=136.60, samples=20 00:22:50.955 iops : min= 128, max= 250, 
avg=184.15, stdev=34.17, samples=20 00:22:50.955 lat (msec) : 50=5.94%, 100=66.36%, 250=27.70% 00:22:50.955 cpu : usr=35.65%, sys=1.14%, ctx=1145, majf=0, minf=9 00:22:50.955 IO depths : 1=1.5%, 2=3.1%, 4=10.1%, 8=73.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename0: (groupid=0, jobs=1): err= 0: pid=98139: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=164, BW=659KiB/s (675kB/s)(6592KiB/10003msec) 00:22:50.955 slat (nsec): min=7816, max=49284, avg=11702.62, stdev=4967.87 00:22:50.955 clat (msec): min=2, max=249, avg=97.00, stdev=36.58 00:22:50.955 lat (msec): min=2, max=249, avg=97.01, stdev=36.58 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 3], 5.00th=[ 50], 10.00th=[ 62], 20.00th=[ 72], 00:22:50.955 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 102], 00:22:50.955 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 146], 95.00th=[ 165], 00:22:50.955 | 99.00th=[ 205], 99.50th=[ 215], 99.90th=[ 251], 99.95th=[ 251], 00:22:50.955 | 99.99th=[ 251] 00:22:50.955 bw ( KiB/s): min= 384, max= 768, per=3.53%, avg=633.26, stdev=124.32, samples=19 00:22:50.955 iops : min= 96, max= 192, avg=158.32, stdev=31.08, samples=19 00:22:50.955 lat (msec) : 4=1.94%, 10=1.33%, 20=0.24%, 50=1.76%, 100=53.82% 00:22:50.955 lat (msec) : 250=40.90% 00:22:50.955 cpu : usr=40.37%, sys=1.27%, ctx=1152, majf=0, minf=9 00:22:50.955 IO depths : 1=3.7%, 2=7.9%, 4=18.4%, 8=60.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98140: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=173, BW=693KiB/s (710kB/s)(6940KiB/10016msec) 00:22:50.955 slat (usec): min=3, max=4032, avg=14.65, stdev=96.63 00:22:50.955 clat (msec): min=29, max=233, avg=92.25, stdev=29.14 00:22:50.955 lat (msec): min=29, max=233, avg=92.26, stdev=29.14 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 69], 00:22:50.955 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 97], 00:22:50.955 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 148], 00:22:50.955 | 99.00th=[ 165], 99.50th=[ 203], 99.90th=[ 234], 99.95th=[ 234], 00:22:50.955 | 99.99th=[ 234] 00:22:50.955 bw ( KiB/s): min= 472, max= 928, per=3.84%, avg=689.60, stdev=147.23, samples=20 00:22:50.955 iops : min= 118, max= 232, avg=172.40, stdev=36.81, samples=20 00:22:50.955 lat (msec) : 50=5.42%, 100=60.63%, 250=33.95% 00:22:50.955 cpu : usr=33.40%, sys=0.92%, ctx=1082, majf=0, minf=9 00:22:50.955 IO depths : 1=1.2%, 2=2.6%, 4=9.9%, 8=73.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98141: Mon 
Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=197, BW=790KiB/s (809kB/s)(7936KiB/10043msec) 00:22:50.955 slat (usec): min=6, max=8028, avg=19.45, stdev=254.45 00:22:50.955 clat (msec): min=23, max=168, avg=80.81, stdev=26.43 00:22:50.955 lat (msec): min=23, max=168, avg=80.83, stdev=26.43 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:22:50.955 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:22:50.955 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 129], 00:22:50.955 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 169], 00:22:50.955 | 99.99th=[ 169] 00:22:50.955 bw ( KiB/s): min= 464, max= 992, per=4.39%, avg=787.20, stdev=140.60, samples=20 00:22:50.955 iops : min= 116, max= 248, avg=196.80, stdev=35.15, samples=20 00:22:50.955 lat (msec) : 50=11.34%, 100=66.58%, 250=22.08% 00:22:50.955 cpu : usr=40.19%, sys=1.31%, ctx=1028, majf=0, minf=9 00:22:50.955 IO depths : 1=1.4%, 2=2.8%, 4=10.0%, 8=73.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98142: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=165, BW=661KiB/s (677kB/s)(6632KiB/10027msec) 00:22:50.955 slat (usec): min=4, max=8023, avg=17.17, stdev=196.83 00:22:50.955 clat (msec): min=28, max=189, avg=96.58, stdev=26.33 00:22:50.955 lat (msec): min=28, max=189, avg=96.60, stdev=26.31 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 50], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 74], 00:22:50.955 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 104], 00:22:50.955 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 136], 95.00th=[ 146], 00:22:50.955 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 190], 99.95th=[ 190], 00:22:50.955 | 99.99th=[ 190] 00:22:50.955 bw ( KiB/s): min= 384, max= 840, per=3.66%, avg=656.55, stdev=120.07, samples=20 00:22:50.955 iops : min= 96, max= 210, avg=164.10, stdev=30.01, samples=20 00:22:50.955 lat (msec) : 50=2.11%, 100=52.77%, 250=45.11% 00:22:50.955 cpu : usr=42.66%, sys=1.37%, ctx=1325, majf=0, minf=9 00:22:50.955 IO depths : 1=3.7%, 2=7.9%, 4=19.6%, 8=59.9%, 16=8.9%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98143: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=163, BW=654KiB/s (669kB/s)(6552KiB/10022msec) 00:22:50.955 slat (usec): min=3, max=8032, avg=19.22, stdev=198.33 00:22:50.955 clat (msec): min=43, max=232, avg=97.78, stdev=31.67 00:22:50.955 lat (msec): min=43, max=232, avg=97.80, stdev=31.67 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 72], 00:22:50.955 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 91], 60.00th=[ 103], 00:22:50.955 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 157], 00:22:50.955 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 232], 99.95th=[ 234], 00:22:50.955 | 99.99th=[ 234] 00:22:50.955 bw ( KiB/s): 
min= 384, max= 896, per=3.61%, avg=648.80, stdev=149.28, samples=20 00:22:50.955 iops : min= 96, max= 224, avg=162.20, stdev=37.32, samples=20 00:22:50.955 lat (msec) : 50=2.81%, 100=56.29%, 250=40.90% 00:22:50.955 cpu : usr=37.54%, sys=1.00%, ctx=1099, majf=0, minf=9 00:22:50.955 IO depths : 1=2.7%, 2=5.8%, 4=16.2%, 8=65.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=91.2%, 8=3.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98144: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=168, BW=672KiB/s (688kB/s)(6732KiB/10013msec) 00:22:50.955 slat (usec): min=3, max=8018, avg=16.20, stdev=195.23 00:22:50.955 clat (msec): min=35, max=181, avg=95.08, stdev=29.78 00:22:50.955 lat (msec): min=35, max=181, avg=95.10, stdev=29.78 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 72], 00:22:50.955 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 97], 00:22:50.955 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 155], 00:22:50.955 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 182], 99.95th=[ 182], 00:22:50.955 | 99.99th=[ 182] 00:22:50.955 bw ( KiB/s): min= 384, max= 896, per=3.76%, avg=674.95, stdev=137.20, samples=19 00:22:50.955 iops : min= 96, max= 224, avg=168.74, stdev=34.30, samples=19 00:22:50.955 lat (msec) : 50=4.34%, 100=58.47%, 250=37.20% 00:22:50.955 cpu : usr=32.15%, sys=1.02%, ctx=888, majf=0, minf=9 00:22:50.955 IO depths : 1=2.0%, 2=4.3%, 4=13.4%, 8=69.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98145: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=183, BW=735KiB/s (753kB/s)(7376KiB/10031msec) 00:22:50.955 slat (usec): min=7, max=3240, avg=13.54, stdev=75.39 00:22:50.955 clat (msec): min=33, max=191, avg=86.92, stdev=28.20 00:22:50.955 lat (msec): min=33, max=191, avg=86.93, stdev=28.20 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 64], 00:22:50.955 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 91], 00:22:50.955 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 125], 95.00th=[ 144], 00:22:50.955 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 192], 99.95th=[ 192], 00:22:50.955 | 99.99th=[ 192] 00:22:50.955 bw ( KiB/s): min= 512, max= 992, per=4.08%, avg=731.20, stdev=148.39, samples=20 00:22:50.955 iops : min= 128, max= 248, avg=182.80, stdev=37.10, samples=20 00:22:50.955 lat (msec) : 50=9.16%, 100=62.42%, 250=28.42% 00:22:50.955 cpu : usr=41.18%, sys=1.31%, ctx=1261, majf=0, minf=9 00:22:50.955 IO depths : 1=2.2%, 2=4.6%, 4=12.4%, 8=69.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=90.8%, 8=5.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 
filename1: (groupid=0, jobs=1): err= 0: pid=98146: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=200, BW=800KiB/s (819kB/s)(8032KiB/10040msec) 00:22:50.955 slat (usec): min=7, max=12067, avg=28.30, stdev=379.54 00:22:50.955 clat (msec): min=39, max=164, avg=79.71, stdev=25.48 00:22:50.955 lat (msec): min=39, max=164, avg=79.74, stdev=25.51 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 43], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 56], 00:22:50.955 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 81], 00:22:50.955 | 70.00th=[ 91], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 126], 00:22:50.955 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 165], 00:22:50.955 | 99.99th=[ 165] 00:22:50.955 bw ( KiB/s): min= 600, max= 1080, per=4.44%, avg=796.80, stdev=139.42, samples=20 00:22:50.955 iops : min= 150, max= 270, avg=199.20, stdev=34.86, samples=20 00:22:50.955 lat (msec) : 50=10.91%, 100=66.43%, 250=22.66% 00:22:50.955 cpu : usr=39.89%, sys=0.96%, ctx=1321, majf=0, minf=9 00:22:50.955 IO depths : 1=2.4%, 2=5.1%, 4=13.7%, 8=68.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename1: (groupid=0, jobs=1): err= 0: pid=98147: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=197, BW=790KiB/s (809kB/s)(7936KiB/10045msec) 00:22:50.955 slat (usec): min=6, max=8024, avg=17.23, stdev=201.23 00:22:50.955 clat (msec): min=33, max=178, avg=80.80, stdev=26.49 00:22:50.955 lat (msec): min=33, max=178, avg=80.82, stdev=26.48 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:22:50.955 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:22:50.955 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 117], 95.00th=[ 126], 00:22:50.955 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:22:50.955 | 99.99th=[ 180] 00:22:50.955 bw ( KiB/s): min= 512, max= 1024, per=4.39%, avg=787.20, stdev=130.18, samples=20 00:22:50.955 iops : min= 128, max= 256, avg=196.80, stdev=32.54, samples=20 00:22:50.955 lat (msec) : 50=14.67%, 100=62.55%, 250=22.78% 00:22:50.955 cpu : usr=34.89%, sys=1.03%, ctx=985, majf=0, minf=9 00:22:50.955 IO depths : 1=0.8%, 2=1.8%, 4=7.5%, 8=76.7%, 16=13.3%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=89.6%, 8=6.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename2: (groupid=0, jobs=1): err= 0: pid=98148: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=194, BW=777KiB/s (796kB/s)(7800KiB/10040msec) 00:22:50.955 slat (usec): min=6, max=8040, avg=24.61, stdev=314.45 00:22:50.955 clat (msec): min=36, max=168, avg=82.19, stdev=23.98 00:22:50.955 lat (msec): min=36, max=168, avg=82.22, stdev=23.99 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:22:50.955 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:22:50.955 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 123], 00:22:50.955 | 99.00th=[ 148], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 169], 
00:22:50.955 | 99.99th=[ 169] 00:22:50.955 bw ( KiB/s): min= 512, max= 992, per=4.31%, avg=773.65, stdev=114.09, samples=20 00:22:50.955 iops : min= 128, max= 248, avg=193.40, stdev=28.54, samples=20 00:22:50.955 lat (msec) : 50=8.56%, 100=68.62%, 250=22.82% 00:22:50.955 cpu : usr=34.70%, sys=1.26%, ctx=1097, majf=0, minf=9 00:22:50.955 IO depths : 1=1.6%, 2=3.4%, 4=11.2%, 8=71.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename2: (groupid=0, jobs=1): err= 0: pid=98149: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=205, BW=822KiB/s (841kB/s)(8252KiB/10045msec) 00:22:50.955 slat (usec): min=3, max=8018, avg=17.32, stdev=197.23 00:22:50.955 clat (msec): min=5, max=162, avg=77.81, stdev=25.69 00:22:50.955 lat (msec): min=5, max=162, avg=77.82, stdev=25.69 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 58], 00:22:50.955 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:22:50.955 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 126], 00:22:50.955 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:22:50.955 | 99.99th=[ 163] 00:22:50.955 bw ( KiB/s): min= 640, max= 1024, per=4.56%, avg=818.55, stdev=118.87, samples=20 00:22:50.955 iops : min= 160, max= 256, avg=204.60, stdev=29.74, samples=20 00:22:50.955 lat (msec) : 10=0.78%, 20=0.78%, 50=10.76%, 100=71.74%, 250=15.95% 00:22:50.955 cpu : usr=43.14%, sys=1.32%, ctx=1415, majf=0, minf=9 00:22:50.955 IO depths : 1=1.2%, 2=3.0%, 4=10.9%, 8=72.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename2: (groupid=0, jobs=1): err= 0: pid=98150: Mon Jul 15 19:38:38 2024 00:22:50.955 read: IOPS=200, BW=803KiB/s (822kB/s)(8048KiB/10026msec) 00:22:50.955 slat (usec): min=4, max=8052, avg=23.31, stdev=268.40 00:22:50.955 clat (msec): min=36, max=168, avg=79.54, stdev=28.27 00:22:50.955 lat (msec): min=36, max=168, avg=79.56, stdev=28.28 00:22:50.955 clat percentiles (msec): 00:22:50.955 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:22:50.955 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 83], 00:22:50.955 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:22:50.955 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 169], 00:22:50.955 | 99.99th=[ 169] 00:22:50.955 bw ( KiB/s): min= 510, max= 1120, per=4.45%, avg=798.30, stdev=211.33, samples=20 00:22:50.955 iops : min= 127, max= 280, avg=199.55, stdev=52.87, samples=20 00:22:50.955 lat (msec) : 50=16.50%, 100=60.83%, 250=22.66% 00:22:50.955 cpu : usr=38.08%, sys=1.29%, ctx=1103, majf=0, minf=9 00:22:50.955 IO depths : 1=1.8%, 2=3.9%, 4=11.9%, 8=71.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:22:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.955 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.955 
latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.955 filename2: (groupid=0, jobs=1): err= 0: pid=98151: Mon Jul 15 19:38:38 2024 00:22:50.956 read: IOPS=196, BW=786KiB/s (805kB/s)(7908KiB/10057msec) 00:22:50.956 slat (nsec): min=3873, max=37637, avg=11023.61, stdev=4088.24 00:22:50.956 clat (msec): min=5, max=178, avg=81.10, stdev=30.49 00:22:50.956 lat (msec): min=5, max=178, avg=81.11, stdev=30.49 00:22:50.956 clat percentiles (msec): 00:22:50.956 | 1.00th=[ 6], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:22:50.956 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:22:50.956 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 142], 00:22:50.956 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 180], 00:22:50.956 | 99.99th=[ 180] 00:22:50.956 bw ( KiB/s): min= 608, max= 1031, per=4.37%, avg=783.70, stdev=127.18, samples=20 00:22:50.956 iops : min= 152, max= 257, avg=195.85, stdev=31.68, samples=20 00:22:50.956 lat (msec) : 10=2.43%, 50=11.79%, 100=62.11%, 250=23.67% 00:22:50.956 cpu : usr=32.78%, sys=1.18%, ctx=921, majf=0, minf=9 00:22:50.956 IO depths : 1=1.3%, 2=3.2%, 4=12.2%, 8=71.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:50.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 complete : 0=0.0%, 4=90.4%, 8=4.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.956 filename2: (groupid=0, jobs=1): err= 0: pid=98152: Mon Jul 15 19:38:38 2024 00:22:50.956 read: IOPS=223, BW=895KiB/s (917kB/s)(8992KiB/10043msec) 00:22:50.956 slat (usec): min=6, max=4059, avg=13.12, stdev=85.59 00:22:50.956 clat (msec): min=29, max=176, avg=71.38, stdev=22.18 00:22:50.956 lat (msec): min=29, max=176, avg=71.39, stdev=22.18 00:22:50.956 clat percentiles (msec): 00:22:50.956 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 53], 00:22:50.956 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 74], 00:22:50.956 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 103], 95.00th=[ 114], 00:22:50.956 | 99.00th=[ 138], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 178], 00:22:50.956 | 99.99th=[ 178] 00:22:50.956 bw ( KiB/s): min= 512, max= 1200, per=4.97%, avg=892.80, stdev=171.27, samples=20 00:22:50.956 iops : min= 128, max= 300, avg=223.20, stdev=42.82, samples=20 00:22:50.956 lat (msec) : 50=15.66%, 100=73.31%, 250=11.03% 00:22:50.956 cpu : usr=41.00%, sys=1.23%, ctx=1249, majf=0, minf=9 00:22:50.956 IO depths : 1=0.1%, 2=0.1%, 4=4.3%, 8=81.3%, 16=14.2%, 32=0.0%, >=64=0.0% 00:22:50.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 complete : 0=0.0%, 4=88.8%, 8=7.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.956 filename2: (groupid=0, jobs=1): err= 0: pid=98153: Mon Jul 15 19:38:38 2024 00:22:50.956 read: IOPS=162, BW=648KiB/s (664kB/s)(6488KiB/10008msec) 00:22:50.956 slat (usec): min=7, max=8041, avg=27.74, stdev=314.71 00:22:50.956 clat (msec): min=30, max=201, avg=98.53, stdev=30.60 00:22:50.956 lat (msec): min=30, max=201, avg=98.56, stdev=30.60 00:22:50.956 clat percentiles (msec): 00:22:50.956 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 69], 20.00th=[ 72], 00:22:50.956 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 105], 00:22:50.956 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 
153], 00:22:50.956 | 99.00th=[ 174], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 203], 00:22:50.956 | 99.99th=[ 203] 00:22:50.956 bw ( KiB/s): min= 384, max= 896, per=3.62%, avg=649.00, stdev=131.54, samples=19 00:22:50.956 iops : min= 96, max= 224, avg=162.21, stdev=32.93, samples=19 00:22:50.956 lat (msec) : 50=4.44%, 100=54.32%, 250=41.25% 00:22:50.956 cpu : usr=34.93%, sys=1.09%, ctx=970, majf=0, minf=9 00:22:50.956 IO depths : 1=3.4%, 2=7.2%, 4=17.9%, 8=62.3%, 16=9.2%, 32=0.0%, >=64=0.0% 00:22:50.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 issued rwts: total=1622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.956 filename2: (groupid=0, jobs=1): err= 0: pid=98154: Mon Jul 15 19:38:38 2024 00:22:50.956 read: IOPS=198, BW=794KiB/s (813kB/s)(7980KiB/10050msec) 00:22:50.956 slat (usec): min=5, max=4083, avg=19.05, stdev=128.65 00:22:50.956 clat (msec): min=6, max=179, avg=80.37, stdev=28.34 00:22:50.956 lat (msec): min=6, max=179, avg=80.39, stdev=28.35 00:22:50.956 clat percentiles (msec): 00:22:50.956 | 1.00th=[ 17], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 60], 00:22:50.956 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:22:50.956 | 70.00th=[ 87], 80.00th=[ 107], 90.00th=[ 122], 95.00th=[ 134], 00:22:50.956 | 99.00th=[ 150], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 180], 00:22:50.956 | 99.99th=[ 180] 00:22:50.956 bw ( KiB/s): min= 560, max= 1042, per=4.40%, avg=790.70, stdev=127.92, samples=20 00:22:50.956 iops : min= 140, max= 260, avg=197.60, stdev=31.87, samples=20 00:22:50.956 lat (msec) : 10=0.80%, 20=0.80%, 50=11.98%, 100=64.61%, 250=21.80% 00:22:50.956 cpu : usr=35.28%, sys=1.08%, ctx=1007, majf=0, minf=9 00:22:50.956 IO depths : 1=1.8%, 2=3.8%, 4=12.5%, 8=70.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:22:50.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 complete : 0=0.0%, 4=90.4%, 8=4.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.956 filename2: (groupid=0, jobs=1): err= 0: pid=98155: Mon Jul 15 19:38:38 2024 00:22:50.956 read: IOPS=198, BW=795KiB/s (814kB/s)(7980KiB/10042msec) 00:22:50.956 slat (usec): min=7, max=4022, avg=14.24, stdev=90.02 00:22:50.956 clat (msec): min=35, max=174, avg=80.40, stdev=25.51 00:22:50.956 lat (msec): min=35, max=174, avg=80.42, stdev=25.51 00:22:50.956 clat percentiles (msec): 00:22:50.956 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 57], 00:22:50.956 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 83], 00:22:50.956 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 129], 00:22:50.956 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 176], 99.95th=[ 176], 00:22:50.956 | 99.99th=[ 176] 00:22:50.956 bw ( KiB/s): min= 512, max= 1200, per=4.41%, avg=791.65, stdev=168.13, samples=20 00:22:50.956 iops : min= 128, max= 300, avg=197.90, stdev=42.05, samples=20 00:22:50.956 lat (msec) : 50=10.33%, 100=66.02%, 250=23.66% 00:22:50.956 cpu : usr=40.04%, sys=1.14%, ctx=1266, majf=0, minf=9 00:22:50.956 IO depths : 1=1.3%, 2=2.6%, 4=9.2%, 8=74.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:22:50.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.956 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:50.956 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:50.956 00:22:50.956 Run status group 0 (all jobs): 00:22:50.956 READ: bw=17.5MiB/s (18.4MB/s), 648KiB/s-895KiB/s (664kB/s-917kB/s), io=176MiB (185MB), run=10003-10057msec 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 bdev_null0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 [2024-07-15 19:38:38.986878] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:50.956 19:38:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 bdev_null1 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.956 { 00:22:50.956 "params": { 00:22:50.956 "name": "Nvme$subsystem", 00:22:50.956 "trtype": "$TEST_TRANSPORT", 00:22:50.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.956 "adrfam": "ipv4", 00:22:50.956 "trsvcid": "$NVMF_PORT", 00:22:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.956 "hdgst": ${hdgst:-false}, 
00:22:50.956 "ddgst": ${ddgst:-false} 00:22:50.956 }, 00:22:50.956 "method": "bdev_nvme_attach_controller" 00:22:50.956 } 00:22:50.956 EOF 00:22:50.956 )") 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.956 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.956 { 00:22:50.956 "params": { 00:22:50.956 "name": "Nvme$subsystem", 00:22:50.956 "trtype": "$TEST_TRANSPORT", 00:22:50.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.956 "adrfam": "ipv4", 00:22:50.956 "trsvcid": "$NVMF_PORT", 00:22:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.956 "hdgst": ${hdgst:-false}, 00:22:50.956 "ddgst": ${ddgst:-false} 00:22:50.956 }, 00:22:50.957 "method": "bdev_nvme_attach_controller" 00:22:50.957 } 00:22:50.957 EOF 00:22:50.957 )") 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:50.957 "params": { 00:22:50.957 "name": "Nvme0", 00:22:50.957 "trtype": "tcp", 00:22:50.957 "traddr": "10.0.0.2", 00:22:50.957 "adrfam": "ipv4", 00:22:50.957 "trsvcid": "4420", 00:22:50.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:50.957 "hdgst": false, 00:22:50.957 "ddgst": false 00:22:50.957 }, 00:22:50.957 "method": "bdev_nvme_attach_controller" 00:22:50.957 },{ 00:22:50.957 "params": { 00:22:50.957 "name": "Nvme1", 00:22:50.957 "trtype": "tcp", 00:22:50.957 "traddr": "10.0.0.2", 00:22:50.957 "adrfam": "ipv4", 00:22:50.957 "trsvcid": "4420", 00:22:50.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.957 "hdgst": false, 00:22:50.957 "ddgst": false 00:22:50.957 }, 00:22:50.957 "method": "bdev_nvme_attach_controller" 00:22:50.957 }' 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:50.957 19:38:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.957 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:50.957 ... 00:22:50.957 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:50.957 ... 
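The job file itself is streamed to fio over /dev/fd/61 and never appears verbatim in the log; only the two "filename0"/"filename1" lines fio echoes above hint at its contents. The following is a rough, hand-written sketch of that file, assuming the parameters set at target/dif.sh@115 and the usual Nvme0n1/Nvme1n1 naming of bdevs attached from cnode0/cnode1; the real gen_fio_conf output may differ in detail.

# Rough reconstruction of the job file gen_fio_conf pipes to fio over /dev/fd/61.
# Output path is hypothetical; option values come from target/dif.sh@115 and the echoed
# job lines above, the bdev names are inferred rather than copied from the log.
cat <<- 'JOB' > /tmp/dif_rand_params.fio
	[global]
	# the SPDK fio bdev plugin only supports the thread I/O model
	thread=1
	ioengine=spdk_bdev
	direct=1
	time_based=1
	runtime=5
	rw=randread
	# read,write,trim sizes, matching the "8192B / 16.0KiB / 128KiB" echoed above
	bs=8k,16k,128k
	iodepth=8
	# 2 jobs x 2 filename sections = the "Starting 4 threads" reported below
	numjobs=2

	[filename0]
	filename=Nvme0n1

	[filename1]
	filename=Nvme1n1
JOB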
00:22:50.957 fio-3.35 00:22:50.957 Starting 4 threads 00:22:55.139 00:22:55.139 filename0: (groupid=0, jobs=1): err= 0: pid=98286: Mon Jul 15 19:38:44 2024 00:22:55.139 read: IOPS=1845, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5001msec) 00:22:55.139 slat (nsec): min=7623, max=66366, avg=13467.60, stdev=4935.22 00:22:55.139 clat (usec): min=1381, max=10196, avg=4275.05, stdev=493.18 00:22:55.139 lat (usec): min=1389, max=10213, avg=4288.52, stdev=493.12 00:22:55.139 clat percentiles (usec): 00:22:55.139 | 1.00th=[ 3818], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4080], 00:22:55.139 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:22:55.139 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5276], 00:22:55.139 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 9372], 99.95th=[10159], 00:22:55.139 | 99.99th=[10159] 00:22:55.139 bw ( KiB/s): min=11712, max=15360, per=24.93%, avg=14750.33, stdev=1162.73, samples=9 00:22:55.139 iops : min= 1464, max= 1920, avg=1843.78, stdev=145.34, samples=9 00:22:55.139 lat (msec) : 2=0.03%, 4=3.76%, 10=96.13%, 20=0.08% 00:22:55.139 cpu : usr=93.36%, sys=5.28%, ctx=5, majf=0, minf=0 00:22:55.139 IO depths : 1=8.9%, 2=18.5%, 4=56.5%, 8=16.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.139 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.139 issued rwts: total=9230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:55.139 filename0: (groupid=0, jobs=1): err= 0: pid=98287: Mon Jul 15 19:38:44 2024 00:22:55.139 read: IOPS=1849, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5001msec) 00:22:55.139 slat (nsec): min=7848, max=40728, avg=13833.50, stdev=4351.27 00:22:55.139 clat (usec): min=2028, max=15231, avg=4255.02, stdev=493.86 00:22:55.139 lat (usec): min=2045, max=15246, avg=4268.86, stdev=494.00 00:22:55.139 clat percentiles (usec): 00:22:55.139 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4080], 00:22:55.139 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:22:55.139 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5211], 00:22:55.139 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 9372], 99.95th=[10159], 00:22:55.139 | 99.99th=[15270] 00:22:55.139 bw ( KiB/s): min=11792, max=15360, per=24.96%, avg=14766.00, stdev=1138.21, samples=9 00:22:55.139 iops : min= 1474, max= 1920, avg=1845.67, stdev=142.25, samples=9 00:22:55.139 lat (msec) : 4=3.06%, 10=96.85%, 20=0.09% 00:22:55.139 cpu : usr=93.76%, sys=5.10%, ctx=6, majf=0, minf=9 00:22:55.139 IO depths : 1=11.4%, 2=25.0%, 4=50.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.139 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.139 issued rwts: total=9248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:55.139 filename1: (groupid=0, jobs=1): err= 0: pid=98288: Mon Jul 15 19:38:44 2024 00:22:55.139 read: IOPS=1847, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:22:55.139 slat (usec): min=7, max=111, avg=14.57, stdev= 4.34 00:22:55.139 clat (usec): min=2081, max=10214, avg=4266.83, stdev=494.24 00:22:55.139 lat (usec): min=2089, max=10230, avg=4281.40, stdev=494.28 00:22:55.139 clat percentiles (usec): 00:22:55.139 | 1.00th=[ 3523], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4080], 00:22:55.139 | 30.00th=[ 
4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:22:55.139 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5276], 00:22:55.139 | 99.00th=[ 5997], 99.50th=[ 6325], 99.90th=[ 9372], 99.95th=[10159], 00:22:55.139 | 99.99th=[10159] 00:22:55.139 bw ( KiB/s): min=11792, max=15360, per=24.94%, avg=14757.11, stdev=1136.68, samples=9 00:22:55.139 iops : min= 1474, max= 1920, avg=1844.56, stdev=142.05, samples=9 00:22:55.139 lat (msec) : 4=5.12%, 10=94.81%, 20=0.08% 00:22:55.139 cpu : usr=94.06%, sys=4.68%, ctx=11, majf=0, minf=9 00:22:55.139 IO depths : 1=7.1%, 2=15.6%, 4=59.3%, 8=18.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.139 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.139 issued rwts: total=9243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:55.139 filename1: (groupid=0, jobs=1): err= 0: pid=98289: Mon Jul 15 19:38:44 2024 00:22:55.139 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5003msec) 00:22:55.139 slat (nsec): min=3770, max=47550, avg=9318.65, stdev=2737.67 00:22:55.139 clat (usec): min=1315, max=9852, avg=4265.96, stdev=442.11 00:22:55.139 lat (usec): min=1329, max=9863, avg=4275.28, stdev=442.26 00:22:55.139 clat percentiles (usec): 00:22:55.139 | 1.00th=[ 3687], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:22:55.139 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:22:55.139 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4948], 95.00th=[ 5276], 00:22:55.139 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7767], 99.95th=[ 9241], 00:22:55.139 | 99.99th=[ 9896] 00:22:55.139 bw ( KiB/s): min=12160, max=15360, per=25.05%, avg=14819.56, stdev=1023.56, samples=9 00:22:55.139 iops : min= 1520, max= 1920, avg=1852.44, stdev=127.94, samples=9 00:22:55.139 lat (msec) : 2=0.09%, 4=2.11%, 10=97.80% 00:22:55.139 cpu : usr=93.48%, sys=5.42%, ctx=9, majf=0, minf=0 00:22:55.139 IO depths : 1=10.2%, 2=25.0%, 4=50.0%, 8=14.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.140 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.140 issued rwts: total=9280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:55.140 00:22:55.140 Run status group 0 (all jobs): 00:22:55.140 READ: bw=57.8MiB/s (60.6MB/s), 14.4MiB/s-14.5MiB/s (15.1MB/s-15.2MB/s), io=289MiB (303MB), run=5001-5003msec 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.398 19:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.398 19:38:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.398 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.399 ************************************ 00:22:55.399 END TEST fio_dif_rand_params 00:22:55.399 ************************************ 00:22:55.399 00:22:55.399 real 0m23.388s 00:22:55.399 user 2m5.782s 00:22:55.399 sys 0m5.542s 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 19:38:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:55.399 19:38:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:55.399 19:38:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:55.399 19:38:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 ************************************ 00:22:55.399 START TEST fio_dif_digest 00:22:55.399 ************************************ 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 
-- # create_subsystems 0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 bdev_null0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.399 [2024-07-15 19:38:45.116049] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.399 { 00:22:55.399 "params": { 00:22:55.399 "name": "Nvme$subsystem", 00:22:55.399 "trtype": "$TEST_TRANSPORT", 00:22:55.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.399 "adrfam": "ipv4", 00:22:55.399 "trsvcid": "$NVMF_PORT", 00:22:55.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.399 "hdgst": ${hdgst:-false}, 00:22:55.399 "ddgst": ${ddgst:-false} 00:22:55.399 }, 00:22:55.399 "method": "bdev_nvme_attach_controller" 00:22:55.399 } 00:22:55.399 EOF 00:22:55.399 )") 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:55.399 "params": { 00:22:55.399 "name": "Nvme0", 00:22:55.399 "trtype": "tcp", 00:22:55.399 "traddr": "10.0.0.2", 00:22:55.399 "adrfam": "ipv4", 00:22:55.399 "trsvcid": "4420", 00:22:55.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:55.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:55.399 "hdgst": true, 00:22:55.399 "ddgst": true 00:22:55.399 }, 00:22:55.399 "method": "bdev_nvme_attach_controller" 00:22:55.399 }' 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:55.399 19:38:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.657 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:55.657 ... 
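Before this fio invocation, create_subsystems 0 built the DIF-type-3 null bdev and the cnode0 subsystem whose connection parameters appear in the JSON above. rpc_cmd is a thin wrapper around the target's JSON-RPC interface, so the same setup can be reproduced by hand roughly as below; the RPC names and arguments are copied from the log, while routing them through scripts/rpc.py and the default RPC socket is an assumption for illustration.

# By-hand equivalent of create_subsystem 0 for the digest test (target/dif.sh@21-24).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, protection information type 3
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
# expose the null bdev as a namespace and listen on the in-namespace address used by the test
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420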
00:22:55.657 fio-3.35 00:22:55.657 Starting 3 threads 00:23:07.858 00:23:07.858 filename0: (groupid=0, jobs=1): err= 0: pid=98391: Mon Jul 15 19:38:55 2024 00:23:07.858 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10009msec) 00:23:07.858 slat (usec): min=7, max=145, avg=15.43, stdev=10.33 00:23:07.858 clat (usec): min=9155, max=19827, avg=12833.99, stdev=838.41 00:23:07.858 lat (usec): min=9169, max=19839, avg=12849.42, stdev=841.87 00:23:07.858 clat percentiles (usec): 00:23:07.858 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:23:07.858 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:23:07.858 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:23:07.858 | 99.00th=[15664], 99.50th=[16319], 99.90th=[18744], 99.95th=[19530], 00:23:07.858 | 99.99th=[19792] 00:23:07.858 bw ( KiB/s): min=27648, max=30720, per=38.65%, avg=29862.40, stdev=706.11, samples=20 00:23:07.858 iops : min= 216, max= 240, avg=233.40, stdev= 5.62, samples=20 00:23:07.858 lat (msec) : 10=0.04%, 20=99.96% 00:23:07.858 cpu : usr=92.38%, sys=6.13%, ctx=19, majf=0, minf=9 00:23:07.858 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.858 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:07.858 filename0: (groupid=0, jobs=1): err= 0: pid=98392: Mon Jul 15 19:38:55 2024 00:23:07.858 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(258MiB/10007msec) 00:23:07.859 slat (usec): min=3, max=132, avg=14.33, stdev= 9.67 00:23:07.859 clat (usec): min=11335, max=21552, avg=14516.24, stdev=1201.58 00:23:07.859 lat (usec): min=11348, max=21569, avg=14530.57, stdev=1205.56 00:23:07.859 clat percentiles (usec): 00:23:07.859 | 1.00th=[12387], 5.00th=[12911], 10.00th=[13173], 20.00th=[13566], 00:23:07.859 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:23:07.859 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[16188], 00:23:07.859 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21365], 99.95th=[21365], 00:23:07.859 | 99.99th=[21627] 00:23:07.859 bw ( KiB/s): min=23040, max=27136, per=34.18%, avg=26409.05, stdev=869.84, samples=20 00:23:07.859 iops : min= 180, max= 212, avg=206.30, stdev= 6.78, samples=20 00:23:07.859 lat (msec) : 20=99.47%, 50=0.53% 00:23:07.859 cpu : usr=92.96%, sys=5.62%, ctx=19, majf=0, minf=0 00:23:07.859 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.859 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.859 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:07.859 filename0: (groupid=0, jobs=1): err= 0: pid=98393: Mon Jul 15 19:38:55 2024 00:23:07.859 read: IOPS=163, BW=20.5MiB/s (21.5MB/s)(205MiB/10007msec) 00:23:07.859 slat (usec): min=8, max=102, avg=16.00, stdev= 8.45 00:23:07.859 clat (usec): min=9713, max=24977, avg=18285.91, stdev=1104.75 00:23:07.859 lat (usec): min=9725, max=25000, avg=18301.90, stdev=1107.12 00:23:07.859 clat percentiles (usec): 00:23:07.859 | 1.00th=[15926], 5.00th=[16712], 10.00th=[17171], 20.00th=[17433], 00:23:07.859 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 
60.00th=[18482], 00:23:07.859 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20055], 00:23:07.859 | 99.00th=[21890], 99.50th=[22414], 99.90th=[24249], 99.95th=[25035], 00:23:07.859 | 99.99th=[25035] 00:23:07.859 bw ( KiB/s): min=19968, max=21504, per=27.12%, avg=20955.65, stdev=485.38, samples=20 00:23:07.859 iops : min= 156, max= 168, avg=163.70, stdev= 3.80, samples=20 00:23:07.859 lat (msec) : 10=0.06%, 20=94.76%, 50=5.18% 00:23:07.859 cpu : usr=92.76%, sys=5.91%, ctx=10, majf=0, minf=9 00:23:07.859 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.859 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.859 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:07.859 00:23:07.859 Run status group 0 (all jobs): 00:23:07.859 READ: bw=75.4MiB/s (79.1MB/s), 20.5MiB/s-29.2MiB/s (21.5MB/s-30.6MB/s), io=755MiB (792MB), run=10007-10009msec 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:07.859 ************************************ 00:23:07.859 END TEST fio_dif_digest 00:23:07.859 ************************************ 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.859 00:23:07.859 real 0m10.902s 00:23:07.859 user 0m28.407s 00:23:07.859 sys 0m2.007s 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.859 19:38:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:07.859 19:38:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:07.859 19:38:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.859 rmmod nvme_tcp 00:23:07.859 rmmod nvme_fabrics 00:23:07.859 rmmod nvme_keyring 00:23:07.859 19:38:56 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97645 ']' 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97645 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97645 ']' 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97645 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97645 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.859 killing process with pid 97645 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97645' 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97645 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97645 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:07.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:07.859 Waiting for block devices as requested 00:23:07.859 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:07.859 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.859 19:38:56 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:07.859 00:23:07.859 real 0m59.255s 00:23:07.859 user 3m50.437s 00:23:07.859 sys 0m14.839s 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.859 19:38:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:07.859 ************************************ 00:23:07.859 END TEST nvmf_dif 00:23:07.859 ************************************ 00:23:07.859 19:38:56 -- common/autotest_common.sh@1142 -- # return 0 00:23:07.859 19:38:56 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:07.859 19:38:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:07.859 19:38:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.859 19:38:56 -- common/autotest_common.sh@10 -- # set +x 00:23:07.859 ************************************ 00:23:07.859 START TEST nvmf_abort_qd_sizes 00:23:07.859 ************************************ 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:07.859 * Looking for test storage... 00:23:07.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.859 19:38:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:07.860 19:38:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:07.860 19:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:07.860 Cannot find device "nvmf_tgt_br" 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.860 Cannot find device "nvmf_tgt_br2" 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:07.860 Cannot find device "nvmf_tgt_br" 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:07.860 Cannot find device "nvmf_tgt_br2" 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.860 19:38:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:07.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:23:07.860 00:23:07.860 --- 10.0.0.2 ping statistics --- 00:23:07.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.860 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:07.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:23:07.860 00:23:07.860 --- 10.0.0.3 ping statistics --- 00:23:07.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.860 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:07.860 00:23:07.860 --- 10.0.0.1 ping statistics --- 00:23:07.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.860 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:07.860 19:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:08.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:08.440 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:08.440 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:08.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98979 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98979 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 98979 ']' 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.440 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:08.698 [2024-07-15 19:38:58.281175] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
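The three pings above close out nvmf_veth_init, which builds the virtual network that the nvmf_tgt now starting will listen on: 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, 10.0.0.1 is the initiator side. Stripped of error handling and helper indirection, the topology reconstructed from the logged commands is roughly:

# Condensed sketch of nvmf_veth_init (nvmf/common.sh@141-207), reconstructed from the log.
ip netns add nvmf_tgt_ns_spdk
# one veth pair per interface: the *_if end carries traffic, the *_br end joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# (each interface is then brought up with "ip link set ... up" before the bridge is assembled)
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT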
00:23:08.699 [2024-07-15 19:38:58.281293] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.699 [2024-07-15 19:38:58.428066] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.699 [2024-07-15 19:38:58.501371] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.699 [2024-07-15 19:38:58.501663] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.699 [2024-07-15 19:38:58.501856] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.699 [2024-07-15 19:38:58.502131] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.699 [2024-07-15 19:38:58.502282] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.957 [2024-07-15 19:38:58.502516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.957 [2024-07-15 19:38:58.502618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.957 [2024-07-15 19:38:58.502673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.957 [2024-07-15 19:38:58.502678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:08.957 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:08.958 19:38:58 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.958 19:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:08.958 ************************************ 00:23:08.958 START TEST spdk_target_abort 00:23:08.958 ************************************ 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:08.958 spdk_targetn1 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.958 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:08.958 [2024-07-15 19:38:58.758768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:09.216 [2024-07-15 19:38:58.786927] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.216 19:38:58 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:09.216 19:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:12.497 Initializing NVMe Controllers 00:23:12.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:12.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:12.497 Initialization complete. Launching workers. 
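The spdk_target_abort setup traced above comes down to five JSON-RPC calls against the running nvmf_tgt before any abort traffic is generated. A minimal sketch using scripts/rpc.py directly (the harness's rpc_cmd helper ends up driving the same methods; the PCI address, listener address and NQN are the ones shown in the trace, and the default RPC socket is assumed):

    #!/usr/bin/env bash
    rpc=scripts/rpc.py    # assumes the default /var/tmp/spdk.sock RPC socket

    # Expose the local PCIe NVMe device as an SPDK bdev; the controller name
    # "spdk_target" yields the namespace bdev "spdk_targetn1" used below.
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target

    # TCP transport, test subsystem, namespace and listener, as in the trace.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420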
00:23:12.497 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10454, failed: 0 00:23:12.497 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1055, failed to submit 9399 00:23:12.497 success 708, unsuccess 347, failed 0 00:23:12.497 19:39:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:12.497 19:39:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:15.772 Initializing NVMe Controllers 00:23:15.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:15.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:15.772 Initialization complete. Launching workers. 00:23:15.772 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5897, failed: 0 00:23:15.772 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 4653 00:23:15.772 success 246, unsuccess 998, failed 0 00:23:15.772 19:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:15.772 19:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:19.051 Initializing NVMe Controllers 00:23:19.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:19.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:19.051 Initialization complete. Launching workers. 
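Each summary block above reports, per queue depth, how many I/Os the workload completed, how many abort commands were submitted or could not be submitted against them, and how many of the submitted aborts succeeded (counter names as printed by the example itself). The rabort helper simply reruns the same binary at queue depths 4, 24 and 64; a rough sketch of that loop, with the flags exactly as traced:

    # Transport ID string assembled from trtype/adrfam/traddr/trsvcid/subnqn.
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

    for qd in 4 24 64; do
        # -q queue depth, -w rw -M 50 mixed read/write, -o 4096-byte I/Os
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done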
00:23:19.051 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28935, failed: 0 00:23:19.051 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2559, failed to submit 26376 00:23:19.051 success 373, unsuccess 2186, failed 0 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.051 19:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98979 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 98979 ']' 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 98979 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98979 00:23:19.987 killing process with pid 98979 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98979' 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 98979 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 98979 00:23:19.987 ************************************ 00:23:19.987 END TEST spdk_target_abort 00:23:19.987 ************************************ 00:23:19.987 00:23:19.987 real 0m11.029s 00:23:19.987 user 0m41.870s 00:23:19.987 sys 0m1.751s 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 19:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:19.987 19:39:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:19.987 19:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:19.987 19:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.987 19:39:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 
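Teardown, also visible above, is the mirror image of the setup: drop the subsystem, detach the PCIe controller, and stop the target process (pid 98979 in this run); the trap installed earlier ensures the same cleanup runs if the test exits early. Roughly:

    rpc=scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
    $rpc bdev_nvme_detach_controller spdk_target
    kill 98979    # killprocess <pid> in the harness; the pid is the one from this run's log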
************************************ 00:23:19.987 START TEST kernel_target_abort 00:23:19.987 ************************************ 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:19.987 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:20.245 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:20.245 19:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:20.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:20.530 Waiting for block devices as requested 00:23:20.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:20.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:20.789 No valid GPT data, bailing 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:20.789 No valid GPT data, bailing 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
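Before anything is exported, the scan above walks /sys/block/nvme* and keeps only namespaces that are not zoned and carry no partition table; the "No valid GPT data, bailing" lines are the expected result for blank devices. A condensed sketch of that screening (the harness additionally consults spdk-gpt.py and mounted partitions; the last idle namespace wins, which is why /dev/nvme1n1 ends up being exported below):

    nvme=
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # skip zoned namespaces
        [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
        # skip devices that already carry a partition table
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        nvme=$dev
    done
    echo "kernel target will export: ${nvme:-<none>}"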
00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:20.789 No valid GPT data, bailing 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:20.789 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:20.790 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:20.790 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:20.790 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:21.048 No valid GPT data, bailing 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:21.048 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 --hostid=679b2b86-338b-4205-a8fd-6b6102ab1055 -a 10.0.0.1 -t tcp -s 4420 00:23:21.049 00:23:21.049 Discovery Log Number of Records 2, Generation counter 2 00:23:21.049 =====Discovery Log Entry 0====== 00:23:21.049 trtype: tcp 00:23:21.049 adrfam: ipv4 00:23:21.049 subtype: current discovery subsystem 00:23:21.049 treq: not specified, sq flow control disable supported 00:23:21.049 portid: 1 00:23:21.049 trsvcid: 4420 00:23:21.049 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:21.049 traddr: 10.0.0.1 00:23:21.049 eflags: none 00:23:21.049 sectype: none 00:23:21.049 =====Discovery Log Entry 1====== 00:23:21.049 trtype: tcp 00:23:21.049 adrfam: ipv4 00:23:21.049 subtype: nvme subsystem 00:23:21.049 treq: not specified, sq flow control disable supported 00:23:21.049 portid: 1 00:23:21.049 trsvcid: 4420 00:23:21.049 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:21.049 traddr: 10.0.0.1 00:23:21.049 eflags: none 00:23:21.049 sectype: none 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:21.049 19:39:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:21.049 19:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:24.338 Initializing NVMe Controllers 00:23:24.338 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:24.338 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:24.338 Initialization complete. Launching workers. 00:23:24.338 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34171, failed: 0 00:23:24.338 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34171, failed to submit 0 00:23:24.338 success 0, unsuccess 34171, failed 0 00:23:24.338 19:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:24.338 19:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:27.623 Initializing NVMe Controllers 00:23:27.623 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:27.623 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:27.623 Initialization complete. Launching workers. 
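The kernel-side target behind these runs is configured entirely through configfs: one subsystem, one namespace pointing at the screened block device, and one TCP port on 10.0.0.1:4420, linked together, after which nvme discover shows both the discovery subsystem and testnqn. A sketch of those steps with the paths from the trace; bash xtrace hides redirection targets, so the attribute file names below are the standard nvmet configfs attributes filled in as an assumption rather than copied from the log (run as root):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet nvmet_tcp        # the trace loads nvmet; nvmet_tcp follows with the TCP port
    mkdir -p "$subsys/namespaces/1" "$port"

    echo 1 > "$subsys/attr_allow_any_host"                  # accept any host NQN
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port

    nvme discover -t tcp -a 10.0.0.1 -s 4420                 # sanity check, as in the trace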
00:23:27.623 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67985, failed: 0 00:23:27.623 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29632, failed to submit 38353 00:23:27.623 success 0, unsuccess 29632, failed 0 00:23:27.623 19:39:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:27.623 19:39:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:30.906 Initializing NVMe Controllers 00:23:30.906 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:30.906 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:30.906 Initialization complete. Launching workers. 00:23:30.906 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77886, failed: 0 00:23:30.906 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19452, failed to submit 58434 00:23:30.906 success 0, unsuccess 19452, failed 0 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:30.906 19:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:31.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:33.057 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:33.057 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:33.313 00:23:33.313 real 0m13.136s 00:23:33.313 user 0m6.350s 00:23:33.313 sys 0m4.185s 00:23:33.313 19:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.313 ************************************ 00:23:33.313 END TEST kernel_target_abort 00:23:33.313 19:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:33.313 ************************************ 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:33.313 
19:39:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.313 19:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.313 rmmod nvme_tcp 00:23:33.313 rmmod nvme_fabrics 00:23:33.313 rmmod nvme_keyring 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98979 ']' 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98979 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 98979 ']' 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 98979 00:23:33.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98979) - No such process 00:23:33.313 Process with pid 98979 is not found 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 98979 is not found' 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:33.313 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:33.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:33.616 Waiting for block devices as requested 00:23:33.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:33.873 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:33.873 00:23:33.873 real 0m26.696s 00:23:33.873 user 0m49.221s 00:23:33.873 sys 0m7.234s 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.873 19:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:33.873 ************************************ 00:23:33.873 END TEST nvmf_abort_qd_sizes 00:23:33.873 ************************************ 00:23:33.873 19:39:23 -- common/autotest_common.sh@1142 -- # return 0 00:23:33.873 19:39:23 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:33.873 19:39:23 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:23:33.873 19:39:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.873 19:39:23 -- common/autotest_common.sh@10 -- # set +x 00:23:33.873 ************************************ 00:23:33.873 START TEST keyring_file 00:23:33.873 ************************************ 00:23:33.873 19:39:23 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:34.131 * Looking for test storage... 00:23:34.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:34.131 19:39:23 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.131 19:39:23 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.131 19:39:23 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.131 19:39:23 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.131 19:39:23 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.131 19:39:23 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.131 19:39:23 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:34.131 19:39:23 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.36PevDdcQL 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.36PevDdcQL 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.36PevDdcQL 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.36PevDdcQL 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qOY5spqBOr 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:34.131 19:39:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qOY5spqBOr 00:23:34.131 19:39:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qOY5spqBOr 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qOY5spqBOr 00:23:34.131 19:39:23 keyring_file -- keyring/file.sh@30 -- # tgtpid=99841 00:23:34.132 19:39:23 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99841 00:23:34.132 19:39:23 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:34.132 19:39:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99841 ']' 00:23:34.132 19:39:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.132 19:39:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.132 19:39:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.132 19:39:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.132 19:39:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:34.389 [2024-07-15 19:39:23.941609] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
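The keyring_file suite works with two file-backed TLS PSKs: each prep_key call writes an interchange-format secret (generated by the inline python helper shown only as "python -" above) into a mktemp file and restricts it to mode 0600, and the files are later registered by name over the bdevperf RPC socket. A sketch of that flow; the key content is a placeholder, since the actual NVMeTLSkey-1 string never appears in the log:

    key0path=$(mktemp)                                   # /tmp/tmp.36PevDdcQL in this run
    printf '%s\n' 'NVMeTLSkey-1:...' > "$key0path"       # interchange-format PSK (placeholder)
    chmod 0600 "$key0path"                               # group/other access is rejected later in the test

    # Registered once bdevperf is listening on /var/tmp/bperf.sock:
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"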
00:23:34.389 [2024-07-15 19:39:23.941723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99841 ] 00:23:34.389 [2024-07-15 19:39:24.082684] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.389 [2024-07-15 19:39:24.152026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.322 19:39:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.322 19:39:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:35.322 19:39:24 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:35.322 19:39:24 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.322 19:39:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 [2024-07-15 19:39:25.001483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.322 null0 00:23:35.322 [2024-07-15 19:39:25.033468] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.322 [2024-07-15 19:39:25.033677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:35.322 [2024-07-15 19:39:25.041473] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.322 19:39:25 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 [2024-07-15 19:39:25.057511] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:35.322 2024/07/15 19:39:25 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:35.322 request: 00:23:35.322 { 00:23:35.322 "method": "nvmf_subsystem_add_listener", 00:23:35.322 "params": { 00:23:35.322 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.322 "secure_channel": false, 00:23:35.322 "listen_address": { 00:23:35.322 "trtype": "tcp", 00:23:35.322 "traddr": "127.0.0.1", 00:23:35.322 "trsvcid": "4420" 00:23:35.322 } 00:23:35.322 } 00:23:35.322 } 00:23:35.322 Got JSON-RPC error 
response 00:23:35.322 GoRPCClient: error on JSON-RPC call 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:35.322 19:39:25 keyring_file -- keyring/file.sh@46 -- # bperfpid=99876 00:23:35.322 19:39:25 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99876 /var/tmp/bperf.sock 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99876 ']' 00:23:35.322 19:39:25 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:35.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.322 19:39:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 [2024-07-15 19:39:25.122508] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:35.322 [2024-07-15 19:39:25.122611] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99876 ] 00:23:35.581 [2024-07-15 19:39:25.262478] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.581 [2024-07-15 19:39:25.332407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.840 19:39:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.840 19:39:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:35.840 19:39:25 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:35.840 19:39:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:36.098 19:39:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qOY5spqBOr 00:23:36.098 19:39:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qOY5spqBOr 00:23:36.430 19:39:25 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:36.430 19:39:25 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:36.430 19:39:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.430 19:39:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:36.430 19:39:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.723 19:39:26 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.36PevDdcQL == 
\/\t\m\p\/\t\m\p\.\3\6\P\e\v\D\d\c\Q\L ]] 00:23:36.723 19:39:26 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:36.723 19:39:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:36.723 19:39:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.723 19:39:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:36.723 19:39:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.982 19:39:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.qOY5spqBOr == \/\t\m\p\/\t\m\p\.\q\O\Y\5\s\p\q\B\O\r ]] 00:23:36.982 19:39:26 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:36.982 19:39:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:36.982 19:39:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.982 19:39:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:36.982 19:39:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.982 19:39:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.241 19:39:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:37.241 19:39:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:37.241 19:39:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:37.241 19:39:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.241 19:39:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.241 19:39:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:37.241 19:39:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.500 19:39:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:37.500 19:39:27 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:37.500 19:39:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:37.758 [2024-07-15 19:39:27.424632] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.758 nvme0n1 00:23:37.758 19:39:27 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:37.758 19:39:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.758 19:39:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:37.758 19:39:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:37.758 19:39:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.758 19:39:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.325 19:39:27 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:38.325 19:39:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:38.325 19:39:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:38.325 19:39:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:38.325 19:39:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:23:38.325 19:39:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.325 19:39:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:38.584 19:39:28 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:38.584 19:39:28 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:38.584 Running I/O for 1 seconds... 00:23:39.519 00:23:39.519 Latency(us) 00:23:39.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.519 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:39.519 nvme0n1 : 1.01 10239.06 40.00 0.00 0.00 12453.42 5987.61 23473.80 00:23:39.519 =================================================================================================================== 00:23:39.519 Total : 10239.06 40.00 0.00 0.00 12453.42 5987.61 23473.80 00:23:39.519 0 00:23:39.519 19:39:29 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:39.519 19:39:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:40.087 19:39:29 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:40.087 19:39:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:40.087 19:39:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:40.087 19:39:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.087 19:39:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.087 19:39:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:40.345 19:39:29 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:40.345 19:39:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:40.345 19:39:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:40.345 19:39:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:40.345 19:39:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.345 19:39:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:40.345 19:39:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.604 19:39:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:40.604 19:39:30 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
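The happy path above strings those pieces together over the bperf RPC socket: attach to the 127.0.0.1:4420 TLS listener by key name, let the already-running bdevperf (started with -z, so it waits for an RPC trigger) perform the one-second randrw run reported in the table, then detach. A sketch with the arguments from the trace:

    bperf_rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    # --psk takes the key name registered with keyring_file_add_key, not a file path.
    bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # Trigger the I/O phase of the waiting bdevperf instance.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    bperf_rpc bdev_nvme_detach_controller nvme0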
00:23:40.604 19:39:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:40.604 19:39:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:40.863 [2024-07-15 19:39:30.488752] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:40.863 [2024-07-15 19:39:30.489721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd270e0 (107): Transport endpoint is not connected 00:23:40.863 [2024-07-15 19:39:30.490705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd270e0 (9): Bad file descriptor 00:23:40.863 [2024-07-15 19:39:30.491702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:40.863 [2024-07-15 19:39:30.491722] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:40.863 [2024-07-15 19:39:30.491732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:40.863 2024/07/15 19:39:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:40.863 request: 00:23:40.863 { 00:23:40.863 "method": "bdev_nvme_attach_controller", 00:23:40.863 "params": { 00:23:40.863 "name": "nvme0", 00:23:40.863 "trtype": "tcp", 00:23:40.863 "traddr": "127.0.0.1", 00:23:40.863 "adrfam": "ipv4", 00:23:40.863 "trsvcid": "4420", 00:23:40.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:40.863 "prchk_reftag": false, 00:23:40.863 "prchk_guard": false, 00:23:40.863 "hdgst": false, 00:23:40.863 "ddgst": false, 00:23:40.863 "psk": "key1" 00:23:40.863 } 00:23:40.863 } 00:23:40.863 Got JSON-RPC error response 00:23:40.863 GoRPCClient: error on JSON-RPC call 00:23:40.863 19:39:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:40.863 19:39:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:40.863 19:39:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:40.863 19:39:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:40.863 19:39:30 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:40.863 19:39:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:40.863 19:39:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:40.863 19:39:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.863 19:39:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.863 19:39:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:41.123 19:39:30 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:41.123 
19:39:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:41.123 19:39:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:41.123 19:39:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:41.123 19:39:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:41.123 19:39:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:41.123 19:39:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:41.690 19:39:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:41.690 19:39:31 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:41.690 19:39:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:41.690 19:39:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:41.690 19:39:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:42.257 19:39:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:42.257 19:39:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:42.257 19:39:31 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:42.257 19:39:32 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:42.257 19:39:32 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.36PevDdcQL 00:23:42.257 19:39:32 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:42.257 19:39:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:42.257 19:39:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:42.257 19:39:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:42.257 19:39:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.257 19:39:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:42.516 19:39:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.516 19:39:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:42.516 19:39:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:42.774 [2024-07-15 19:39:32.322868] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.36PevDdcQL': 0100660 00:23:42.774 [2024-07-15 19:39:32.322918] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:42.774 2024/07/15 19:39:32 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.36PevDdcQL], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:23:42.774 request: 00:23:42.774 { 00:23:42.774 "method": "keyring_file_add_key", 00:23:42.774 "params": { 00:23:42.774 "name": "key0", 00:23:42.774 "path": "/tmp/tmp.36PevDdcQL" 00:23:42.774 } 00:23:42.774 } 00:23:42.774 Got JSON-RPC error response 00:23:42.774 GoRPCClient: error on JSON-RPC call 00:23:42.774 19:39:32 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:23:42.774 19:39:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.774 19:39:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.774 19:39:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.774 19:39:32 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.36PevDdcQL 00:23:42.774 19:39:32 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:42.774 19:39:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.36PevDdcQL 00:23:43.033 19:39:32 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.36PevDdcQL 00:23:43.033 19:39:32 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:43.033 19:39:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:43.033 19:39:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:43.033 19:39:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:43.033 19:39:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:43.033 19:39:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:43.291 19:39:32 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:43.291 19:39:32 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.291 19:39:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:43.291 19:39:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:43.565 [2024-07-15 19:39:33.199044] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.36PevDdcQL': No such file or directory 00:23:43.565 [2024-07-15 19:39:33.199091] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:43.565 [2024-07-15 19:39:33.199118] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:43.565 [2024-07-15 19:39:33.199127] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:43.565 [2024-07-15 19:39:33.199136] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:43.566 2024/07/15 
19:39:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:23:43.566 request: 00:23:43.566 { 00:23:43.566 "method": "bdev_nvme_attach_controller", 00:23:43.566 "params": { 00:23:43.566 "name": "nvme0", 00:23:43.566 "trtype": "tcp", 00:23:43.566 "traddr": "127.0.0.1", 00:23:43.566 "adrfam": "ipv4", 00:23:43.566 "trsvcid": "4420", 00:23:43.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:43.566 "prchk_reftag": false, 00:23:43.566 "prchk_guard": false, 00:23:43.566 "hdgst": false, 00:23:43.566 "ddgst": false, 00:23:43.566 "psk": "key0" 00:23:43.566 } 00:23:43.566 } 00:23:43.566 Got JSON-RPC error response 00:23:43.566 GoRPCClient: error on JSON-RPC call 00:23:43.566 19:39:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:43.566 19:39:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.566 19:39:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.566 19:39:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.566 19:39:33 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:43.566 19:39:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:43.853 19:39:33 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SZIC1tp7tD 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:43.853 19:39:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:43.853 19:39:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.853 19:39:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:43.853 19:39:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:43.853 19:39:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:43.853 19:39:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SZIC1tp7tD 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SZIC1tp7tD 00:23:43.853 19:39:33 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.SZIC1tp7tD 00:23:43.853 19:39:33 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SZIC1tp7tD 00:23:43.853 19:39:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SZIC1tp7tD 00:23:44.110 19:39:33 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:44.110 19:39:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:44.367 nvme0n1 00:23:44.367 19:39:34 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:44.367 19:39:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:44.367 19:39:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:44.367 19:39:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:44.367 19:39:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.367 19:39:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:44.625 19:39:34 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:44.625 19:39:34 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:44.625 19:39:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:44.889 19:39:34 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:44.889 19:39:34 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:44.889 19:39:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:44.889 19:39:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.889 19:39:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:45.149 19:39:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:45.149 19:39:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:45.149 19:39:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:45.149 19:39:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:45.149 19:39:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:45.149 19:39:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:45.149 19:39:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:45.406 19:39:35 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:45.406 19:39:35 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:45.406 19:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:45.664 19:39:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:45.664 19:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:45.664 19:39:35 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:45.922 19:39:35 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:45.922 19:39:35 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SZIC1tp7tD 00:23:45.922 19:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SZIC1tp7tD 
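The prep_key steps above reduce to: write an interchange-format PSK to a temporary file, restrict its mode to 0600 (the earlier 0660 attempt was rejected by keyring_file_check_path), register the file under a key name, and pass that name to bdev_nvme_attach_controller. A minimal sketch of the same flow, reusing the key material from this test (00112233445566778899aabbccddeeff, digest 0) and assuming the same rpc.py and bperf.sock paths:

    #!/usr/bin/env bash
    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Interchange-format PSK as produced by format_interchange_psk in this log.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

    keyfile=$(mktemp)        # e.g. /tmp/tmp.XXXXXXXXXX
    echo "$psk" > "$keyfile"
    chmod 0600 "$keyfile"    # more permissive modes are rejected

    # Register the file under the name "key0", then reference it by name.
    "$rpc" -s "$sock" keyring_file_add_key key0 "$keyfile"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key0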
00:23:46.180 19:39:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qOY5spqBOr 00:23:46.180 19:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qOY5spqBOr 00:23:46.438 19:39:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:46.438 19:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:46.696 nvme0n1 00:23:46.696 19:39:36 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:46.696 19:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:47.263 19:39:36 keyring_file -- keyring/file.sh@112 -- # config='{ 00:23:47.263 "subsystems": [ 00:23:47.263 { 00:23:47.263 "subsystem": "keyring", 00:23:47.263 "config": [ 00:23:47.263 { 00:23:47.263 "method": "keyring_file_add_key", 00:23:47.263 "params": { 00:23:47.263 "name": "key0", 00:23:47.263 "path": "/tmp/tmp.SZIC1tp7tD" 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "keyring_file_add_key", 00:23:47.263 "params": { 00:23:47.263 "name": "key1", 00:23:47.263 "path": "/tmp/tmp.qOY5spqBOr" 00:23:47.263 } 00:23:47.263 } 00:23:47.263 ] 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "subsystem": "iobuf", 00:23:47.263 "config": [ 00:23:47.263 { 00:23:47.263 "method": "iobuf_set_options", 00:23:47.263 "params": { 00:23:47.263 "large_bufsize": 135168, 00:23:47.263 "large_pool_count": 1024, 00:23:47.263 "small_bufsize": 8192, 00:23:47.263 "small_pool_count": 8192 00:23:47.263 } 00:23:47.263 } 00:23:47.263 ] 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "subsystem": "sock", 00:23:47.263 "config": [ 00:23:47.263 { 00:23:47.263 "method": "sock_set_default_impl", 00:23:47.263 "params": { 00:23:47.263 "impl_name": "posix" 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "sock_impl_set_options", 00:23:47.263 "params": { 00:23:47.263 "enable_ktls": false, 00:23:47.263 "enable_placement_id": 0, 00:23:47.263 "enable_quickack": false, 00:23:47.263 "enable_recv_pipe": true, 00:23:47.263 "enable_zerocopy_send_client": false, 00:23:47.263 "enable_zerocopy_send_server": true, 00:23:47.263 "impl_name": "ssl", 00:23:47.263 "recv_buf_size": 4096, 00:23:47.263 "send_buf_size": 4096, 00:23:47.263 "tls_version": 0, 00:23:47.263 "zerocopy_threshold": 0 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "sock_impl_set_options", 00:23:47.263 "params": { 00:23:47.263 "enable_ktls": false, 00:23:47.263 "enable_placement_id": 0, 00:23:47.263 "enable_quickack": false, 00:23:47.263 "enable_recv_pipe": true, 00:23:47.263 "enable_zerocopy_send_client": false, 00:23:47.263 "enable_zerocopy_send_server": true, 00:23:47.263 "impl_name": "posix", 00:23:47.263 "recv_buf_size": 2097152, 00:23:47.263 "send_buf_size": 2097152, 00:23:47.263 "tls_version": 0, 00:23:47.263 "zerocopy_threshold": 0 00:23:47.263 } 00:23:47.263 } 00:23:47.263 ] 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "subsystem": "vmd", 00:23:47.263 "config": [] 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "subsystem": "accel", 00:23:47.263 "config": [ 00:23:47.263 { 00:23:47.263 "method": 
"accel_set_options", 00:23:47.263 "params": { 00:23:47.263 "buf_count": 2048, 00:23:47.263 "large_cache_size": 16, 00:23:47.263 "sequence_count": 2048, 00:23:47.263 "small_cache_size": 128, 00:23:47.263 "task_count": 2048 00:23:47.263 } 00:23:47.263 } 00:23:47.263 ] 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "subsystem": "bdev", 00:23:47.263 "config": [ 00:23:47.263 { 00:23:47.263 "method": "bdev_set_options", 00:23:47.263 "params": { 00:23:47.263 "bdev_auto_examine": true, 00:23:47.263 "bdev_io_cache_size": 256, 00:23:47.263 "bdev_io_pool_size": 65535, 00:23:47.263 "iobuf_large_cache_size": 16, 00:23:47.263 "iobuf_small_cache_size": 128 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "bdev_raid_set_options", 00:23:47.263 "params": { 00:23:47.263 "process_window_size_kb": 1024 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "bdev_iscsi_set_options", 00:23:47.263 "params": { 00:23:47.263 "timeout_sec": 30 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "bdev_nvme_set_options", 00:23:47.263 "params": { 00:23:47.263 "action_on_timeout": "none", 00:23:47.263 "allow_accel_sequence": false, 00:23:47.263 "arbitration_burst": 0, 00:23:47.263 "bdev_retry_count": 3, 00:23:47.263 "ctrlr_loss_timeout_sec": 0, 00:23:47.263 "delay_cmd_submit": true, 00:23:47.263 "dhchap_dhgroups": [ 00:23:47.263 "null", 00:23:47.263 "ffdhe2048", 00:23:47.263 "ffdhe3072", 00:23:47.263 "ffdhe4096", 00:23:47.263 "ffdhe6144", 00:23:47.263 "ffdhe8192" 00:23:47.263 ], 00:23:47.263 "dhchap_digests": [ 00:23:47.263 "sha256", 00:23:47.263 "sha384", 00:23:47.263 "sha512" 00:23:47.263 ], 00:23:47.263 "disable_auto_failback": false, 00:23:47.263 "fast_io_fail_timeout_sec": 0, 00:23:47.263 "generate_uuids": false, 00:23:47.263 "high_priority_weight": 0, 00:23:47.263 "io_path_stat": false, 00:23:47.263 "io_queue_requests": 512, 00:23:47.263 "keep_alive_timeout_ms": 10000, 00:23:47.263 "low_priority_weight": 0, 00:23:47.263 "medium_priority_weight": 0, 00:23:47.263 "nvme_adminq_poll_period_us": 10000, 00:23:47.263 "nvme_error_stat": false, 00:23:47.263 "nvme_ioq_poll_period_us": 0, 00:23:47.263 "rdma_cm_event_timeout_ms": 0, 00:23:47.263 "rdma_max_cq_size": 0, 00:23:47.263 "rdma_srq_size": 0, 00:23:47.263 "reconnect_delay_sec": 0, 00:23:47.263 "timeout_admin_us": 0, 00:23:47.263 "timeout_us": 0, 00:23:47.263 "transport_ack_timeout": 0, 00:23:47.263 "transport_retry_count": 4, 00:23:47.263 "transport_tos": 0 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "bdev_nvme_attach_controller", 00:23:47.263 "params": { 00:23:47.263 "adrfam": "IPv4", 00:23:47.263 "ctrlr_loss_timeout_sec": 0, 00:23:47.263 "ddgst": false, 00:23:47.263 "fast_io_fail_timeout_sec": 0, 00:23:47.263 "hdgst": false, 00:23:47.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:47.263 "name": "nvme0", 00:23:47.263 "prchk_guard": false, 00:23:47.263 "prchk_reftag": false, 00:23:47.263 "psk": "key0", 00:23:47.263 "reconnect_delay_sec": 0, 00:23:47.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.263 "traddr": "127.0.0.1", 00:23:47.263 "trsvcid": "4420", 00:23:47.263 "trtype": "TCP" 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "bdev_nvme_set_hotplug", 00:23:47.263 "params": { 00:23:47.263 "enable": false, 00:23:47.263 "period_us": 100000 00:23:47.263 } 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "method": "bdev_wait_for_examine" 00:23:47.263 } 00:23:47.263 ] 00:23:47.263 }, 00:23:47.263 { 00:23:47.263 "subsystem": "nbd", 00:23:47.263 "config": [] 00:23:47.263 } 
00:23:47.263 ] 00:23:47.264 }' 00:23:47.264 19:39:36 keyring_file -- keyring/file.sh@114 -- # killprocess 99876 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99876 ']' 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99876 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99876 00:23:47.264 killing process with pid 99876 00:23:47.264 Received shutdown signal, test time was about 1.000000 seconds 00:23:47.264 00:23:47.264 Latency(us) 00:23:47.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.264 =================================================================================================================== 00:23:47.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99876' 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@967 -- # kill 99876 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@972 -- # wait 99876 00:23:47.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:47.264 19:39:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=100334 00:23:47.264 19:39:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100334 /var/tmp/bperf.sock 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100334 ']' 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:47.264 19:39:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.264 19:39:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:23:47.264 "subsystems": [ 00:23:47.264 { 00:23:47.264 "subsystem": "keyring", 00:23:47.264 "config": [ 00:23:47.264 { 00:23:47.264 "method": "keyring_file_add_key", 00:23:47.264 "params": { 00:23:47.264 "name": "key0", 00:23:47.264 "path": "/tmp/tmp.SZIC1tp7tD" 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "method": "keyring_file_add_key", 00:23:47.264 "params": { 00:23:47.264 "name": "key1", 00:23:47.264 "path": "/tmp/tmp.qOY5spqBOr" 00:23:47.264 } 00:23:47.264 } 00:23:47.264 ] 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "subsystem": "iobuf", 00:23:47.264 "config": [ 00:23:47.264 { 00:23:47.264 "method": "iobuf_set_options", 00:23:47.264 "params": { 00:23:47.264 "large_bufsize": 135168, 00:23:47.264 "large_pool_count": 1024, 00:23:47.264 "small_bufsize": 8192, 00:23:47.264 "small_pool_count": 8192 00:23:47.264 } 00:23:47.264 } 00:23:47.264 ] 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "subsystem": "sock", 00:23:47.264 "config": [ 00:23:47.264 { 00:23:47.264 "method": "sock_set_default_impl", 00:23:47.264 "params": { 00:23:47.264 "impl_name": "posix" 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "method": "sock_impl_set_options", 00:23:47.264 "params": { 00:23:47.264 "enable_ktls": false, 00:23:47.264 "enable_placement_id": 0, 00:23:47.264 "enable_quickack": false, 00:23:47.264 "enable_recv_pipe": true, 00:23:47.264 "enable_zerocopy_send_client": false, 00:23:47.264 
"enable_zerocopy_send_server": true, 00:23:47.264 "impl_name": "ssl", 00:23:47.264 "recv_buf_size": 4096, 00:23:47.264 "send_buf_size": 4096, 00:23:47.264 "tls_version": 0, 00:23:47.264 "zerocopy_threshold": 0 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "method": "sock_impl_set_options", 00:23:47.264 "params": { 00:23:47.264 "enable_ktls": false, 00:23:47.264 "enable_placement_id": 0, 00:23:47.264 "enable_quickack": false, 00:23:47.264 "enable_recv_pipe": true, 00:23:47.264 "enable_zerocopy_send_client": false, 00:23:47.264 "enable_zerocopy_send_server": true, 00:23:47.264 "impl_name": "posix", 00:23:47.264 "recv_buf_size": 2097152, 00:23:47.264 "send_buf_size": 2097152, 00:23:47.264 "tls_version": 0, 00:23:47.264 "zerocopy_threshold": 0 00:23:47.264 } 00:23:47.264 } 00:23:47.264 ] 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "subsystem": "vmd", 00:23:47.264 "config": [] 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "subsystem": "accel", 00:23:47.264 "config": [ 00:23:47.264 { 00:23:47.264 "method": "accel_set_options", 00:23:47.264 "params": { 00:23:47.264 "buf_count": 2048, 00:23:47.264 "large_cache_size": 16, 00:23:47.264 "sequence_count": 2048, 00:23:47.264 "small_cache_size": 128, 00:23:47.264 "task_count": 2048 00:23:47.264 } 00:23:47.264 } 00:23:47.264 ] 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "subsystem": "bdev", 00:23:47.264 "config": [ 00:23:47.264 { 00:23:47.264 "method": "bdev_set_options", 00:23:47.264 "params": { 00:23:47.264 "bdev_auto_examine": true, 00:23:47.264 "bdev_io_cache_size": 256, 00:23:47.264 "bdev_io_pool_size": 65535, 00:23:47.264 "iobuf_large_cache_size": 16, 00:23:47.264 "iobuf_small_cache_size": 128 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "method": "bdev_raid_set_options", 00:23:47.264 "params": { 00:23:47.264 "process_window_size_kb": 1024 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "method": "bdev_iscsi_set_options", 00:23:47.264 "params": { 00:23:47.264 "timeout_sec": 30 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "method": "bdev_nvme_set_options", 00:23:47.264 "params": { 00:23:47.264 "action_on_timeout": "none", 00:23:47.264 "allow_accel_sequence": false, 00:23:47.264 "arbitration_burst": 0, 00:23:47.264 "bdev_retry_count": 3, 00:23:47.264 "ctrlr_loss_timeout_sec": 0, 00:23:47.264 "delay_cmd_submit": true, 00:23:47.264 "dhchap_dhgroups": [ 00:23:47.264 "null", 00:23:47.264 "ffdhe2048", 00:23:47.264 "ffdhe3072", 00:23:47.264 "ffdhe4096", 00:23:47.264 "ffdhe6144", 00:23:47.264 "ffdhe8192" 00:23:47.264 ], 00:23:47.264 "dhchap_digests": [ 00:23:47.264 "sha256", 00:23:47.264 "sha384", 00:23:47.264 "sha512" 00:23:47.264 ], 00:23:47.264 "disable_auto_failback": false, 00:23:47.264 "fast_io_fail_timeout_sec": 0, 00:23:47.264 "generate_uuids": false, 00:23:47.264 "high_priority_weight": 0, 00:23:47.264 "io_path_stat": false, 00:23:47.264 "io_queue_requests": 512, 00:23:47.264 "keep_alive_timeout_ms": 10000, 00:23:47.264 "low_priority_weight": 0, 00:23:47.264 "medium_priority_weight": 0, 00:23:47.264 "nvme_adminq_poll_period_us": 10000, 00:23:47.264 "nvme_error_stat": false, 00:23:47.264 "nvme_ioq_poll_period_us": 0, 00:23:47.264 "rdma_cm_event_timeout_ms": 0, 00:23:47.264 "rdma_max_cq_size": 0, 00:23:47.264 "rdma_srq_size": 0, 00:23:47.264 "reconnect_delay_sec": 0, 00:23:47.264 "timeout_admin_us": 0, 00:23:47.264 "timeout_us": 0, 00:23:47.264 "transport_ack_timeout": 0, 00:23:47.264 "transport_retry_count": 4, 00:23:47.264 "transport_tos": 0 00:23:47.264 } 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 
"method": "bdev_nvme_attach_controller", 00:23:47.264 "params": { 00:23:47.264 "adrfam": "IPv4", 00:23:47.264 "ctrlr_loss_timeout_sec": 0, 00:23:47.264 "ddgst": false, 00:23:47.264 "fast_io_fail_timeout_sec": 0, 00:23:47.264 "hdgst": false, 00:23:47.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:47.264 "name": "nvme0", 00:23:47.264 "prchk_guard": false, 00:23:47.265 "prchk_reftag": false, 00:23:47.265 "psk": "key0", 00:23:47.265 "reconnect_delay_sec": 0, 00:23:47.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.265 "traddr": "127.0.0.1", 00:23:47.265 "trsvcid": "4420", 00:23:47.265 "trtype": "TCP" 00:23:47.265 } 00:23:47.265 }, 00:23:47.265 { 00:23:47.265 "method": "bdev_nvme_set_hotplug", 00:23:47.265 "params": { 00:23:47.265 "enable": false, 00:23:47.265 "period_us": 100000 00:23:47.265 } 00:23:47.265 }, 00:23:47.265 { 00:23:47.265 "method": "bdev_wait_for_examine" 00:23:47.265 } 00:23:47.265 ] 00:23:47.265 }, 00:23:47.265 { 00:23:47.265 "subsystem": "nbd", 00:23:47.265 "config": [] 00:23:47.265 } 00:23:47.265 ] 00:23:47.265 }' 00:23:47.265 19:39:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:47.265 19:39:36 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:47.265 19:39:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.265 19:39:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:47.265 [2024-07-15 19:39:37.048003] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:47.265 [2024-07-15 19:39:37.048109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100334 ] 00:23:47.619 [2024-07-15 19:39:37.184004] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.619 [2024-07-15 19:39:37.253767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.619 [2024-07-15 19:39:37.399251] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.552 19:39:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.552 19:39:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:48.552 19:39:38 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:48.552 19:39:38 keyring_file -- keyring/file.sh@120 -- # jq length 00:23:48.552 19:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.552 19:39:38 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:48.552 19:39:38 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:23:48.552 19:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:48.552 19:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:48.552 19:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:48.552 19:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.552 19:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:49.118 19:39:38 keyring_file -- keyring/file.sh@121 -- 
# (( 2 == 2 )) 00:23:49.118 19:39:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:23:49.118 19:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:49.118 19:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:49.118 19:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:49.118 19:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:49.118 19:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:49.376 19:39:38 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:49.376 19:39:38 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:49.376 19:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:49.376 19:39:38 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:49.633 19:39:39 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:49.633 19:39:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:49.633 19:39:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SZIC1tp7tD /tmp/tmp.qOY5spqBOr 00:23:49.633 19:39:39 keyring_file -- keyring/file.sh@20 -- # killprocess 100334 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100334 ']' 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100334 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100334 00:23:49.633 killing process with pid 100334 00:23:49.633 Received shutdown signal, test time was about 1.000000 seconds 00:23:49.633 00:23:49.633 Latency(us) 00:23:49.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.633 =================================================================================================================== 00:23:49.633 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100334' 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@967 -- # kill 100334 00:23:49.633 19:39:39 keyring_file -- common/autotest_common.sh@972 -- # wait 100334 00:23:49.890 19:39:39 keyring_file -- keyring/file.sh@21 -- # killprocess 99841 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99841 ']' 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99841 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99841 00:23:49.890 killing process with pid 99841 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.890 19:39:39 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 99841' 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@967 -- # kill 99841 00:23:49.890 [2024-07-15 19:39:39.461932] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.890 19:39:39 keyring_file -- common/autotest_common.sh@972 -- # wait 99841 00:23:50.148 00:23:50.148 real 0m16.092s 00:23:50.148 user 0m41.059s 00:23:50.148 sys 0m3.078s 00:23:50.148 19:39:39 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.148 ************************************ 00:23:50.148 END TEST keyring_file 00:23:50.148 ************************************ 00:23:50.148 19:39:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:50.148 19:39:39 -- common/autotest_common.sh@1142 -- # return 0 00:23:50.148 19:39:39 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:23:50.148 19:39:39 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:50.148 19:39:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:50.148 19:39:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.148 19:39:39 -- common/autotest_common.sh@10 -- # set +x 00:23:50.148 ************************************ 00:23:50.148 START TEST keyring_linux 00:23:50.148 ************************************ 00:23:50.148 19:39:39 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:50.148 * Looking for test storage... 00:23:50.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:50.148 19:39:39 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:50.148 19:39:39 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:679b2b86-338b-4205-a8fd-6b6102ab1055 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=679b2b86-338b-4205-a8fd-6b6102ab1055 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:50.148 19:39:39 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.148 19:39:39 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.148 19:39:39 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.148 19:39:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.148 19:39:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.148 19:39:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.148 19:39:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:50.148 19:39:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.148 19:39:39 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:50.149 19:39:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:50.149 19:39:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:50.149 19:39:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:50.149 19:39:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:50.149 19:39:39 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:50.149 19:39:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:50.149 /tmp/:spdk-test:key0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:50.149 19:39:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:50.149 19:39:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:50.149 19:39:39 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:50.407 19:39:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:50.407 /tmp/:spdk-test:key1 00:23:50.407 19:39:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:50.407 19:39:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100487 00:23:50.407 19:39:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100487 00:23:50.407 19:39:39 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100487 ']' 00:23:50.407 19:39:39 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:50.407 19:39:39 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.407 19:39:39 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.407 19:39:39 keyring_linux -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.407 19:39:39 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.407 19:39:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:50.407 [2024-07-15 19:39:40.034649] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:50.407 [2024-07-15 19:39:40.034744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100487 ] 00:23:50.407 [2024-07-15 19:39:40.169593] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.665 [2024-07-15 19:39:40.238007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.665 19:39:40 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.665 19:39:40 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:50.665 19:39:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:50.665 19:39:40 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.665 19:39:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:50.665 [2024-07-15 19:39:40.422775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.665 null0 00:23:50.665 [2024-07-15 19:39:40.454695] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.665 [2024-07-15 19:39:40.454937] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.923 19:39:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:50.923 65152785 00:23:50.923 19:39:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:50.923 1020755371 00:23:50.923 19:39:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100511 00:23:50.923 19:39:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100511 /var/tmp/bperf.sock 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100511 ']' 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:50.923 19:39:40 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.923 19:39:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:50.923 [2024-07-15 19:39:40.539417] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
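The keyring_linux flow running here keeps the PSK in the kernel session keyring instead of a file: keyctl add registers it as :spdk-test:key0 (serial 65152785 above), the Linux keyring module is enabled over RPC before framework initialization (hence --wait-for-rpc on the bdevperf command line above), and the controller then references the key by its keyring name. A minimal sketch of that sequence, assuming the same socket, key material, and NQNs as this run:

    #!/usr/bin/env bash
    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

    # Put the PSK into the kernel session keyring; keyctl prints its serial.
    keyctl add user :spdk-test:key0 "$psk" @s

    # bdevperf was started with --wait-for-rpc, so the Linux keyring module
    # can be enabled before the framework initializes.
    "$rpc" -s "$sock" keyring_linux_set_options --enable
    "$rpc" -s "$sock" framework_start_init

    # The PSK is referenced by its keyring name rather than a file-backed key.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # Cleanup mirrors keyring/linux.sh: look the key up and unlink it.
    keyctl unlink "$(keyctl search @s user :spdk-test:key0)"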
00:23:50.923 [2024-07-15 19:39:40.539515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100511 ] 00:23:50.923 [2024-07-15 19:39:40.673089] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.181 [2024-07-15 19:39:40.753844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.181 19:39:40 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.181 19:39:40 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:51.181 19:39:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:51.181 19:39:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:51.438 19:39:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:51.438 19:39:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:51.695 19:39:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:51.695 19:39:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:51.952 [2024-07-15 19:39:41.688131] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.208 nvme0n1 00:23:52.208 19:39:41 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:52.208 19:39:41 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:52.208 19:39:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:52.208 19:39:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:52.208 19:39:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:52.208 19:39:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.464 19:39:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:52.464 19:39:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:52.464 19:39:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:52.464 19:39:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:52.464 19:39:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:52.464 19:39:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.464 19:39:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:52.722 19:39:42 keyring_linux -- keyring/linux.sh@25 -- # sn=65152785 00:23:52.722 19:39:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:52.722 19:39:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:52.722 19:39:42 keyring_linux -- keyring/linux.sh@26 -- # [[ 65152785 == \6\5\1\5\2\7\8\5 ]] 00:23:52.722 19:39:42 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 65152785 00:23:52.722 19:39:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:52.722 19:39:42 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:52.722 Running I/O for 1 seconds... 00:23:53.652 00:23:53.652 Latency(us) 00:23:53.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.652 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:53.652 nvme0n1 : 1.02 9813.43 38.33 0.00 0.00 12906.97 4855.62 26333.56 00:23:53.652 =================================================================================================================== 00:23:53.652 Total : 9813.43 38.33 0.00 0.00 12906.97 4855.62 26333.56 00:23:53.652 0 00:23:53.942 19:39:43 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:53.942 19:39:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:54.218 19:39:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:54.218 19:39:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:54.218 19:39:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:54.218 19:39:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:54.218 19:39:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:54.218 19:39:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:54.474 19:39:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:54.474 19:39:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:54.474 19:39:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:54.474 19:39:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.474 19:39:44 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:54.474 19:39:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:23:54.732 [2024-07-15 19:39:44.312241] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:54.732 [2024-07-15 19:39:44.312884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f64020 (107): Transport endpoint is not connected 00:23:54.732 [2024-07-15 19:39:44.313871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f64020 (9): Bad file descriptor 00:23:54.732 [2024-07-15 19:39:44.314867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.732 [2024-07-15 19:39:44.314891] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:54.732 [2024-07-15 19:39:44.314901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.732 2024/07/15 19:39:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:54.732 request: 00:23:54.732 { 00:23:54.732 "method": "bdev_nvme_attach_controller", 00:23:54.732 "params": { 00:23:54.732 "name": "nvme0", 00:23:54.732 "trtype": "tcp", 00:23:54.732 "traddr": "127.0.0.1", 00:23:54.732 "adrfam": "ipv4", 00:23:54.732 "trsvcid": "4420", 00:23:54.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:54.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:54.732 "prchk_reftag": false, 00:23:54.732 "prchk_guard": false, 00:23:54.732 "hdgst": false, 00:23:54.732 "ddgst": false, 00:23:54.732 "psk": ":spdk-test:key1" 00:23:54.732 } 00:23:54.732 } 00:23:54.732 Got JSON-RPC error response 00:23:54.732 GoRPCClient: error on JSON-RPC call 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@33 -- # sn=65152785 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 65152785 00:23:54.732 1 links removed 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@33 -- # sn=1020755371 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1020755371 00:23:54.732 1 links removed 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100511 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100511 ']' 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100511 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100511 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100511' 00:23:54.732 killing process with pid 100511 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@967 -- # kill 100511 00:23:54.732 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.732 00:23:54.732 Latency(us) 00:23:54.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.732 =================================================================================================================== 00:23:54.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@972 -- # wait 100511 00:23:54.732 19:39:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100487 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100487 ']' 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100487 00:23:54.732 19:39:44 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:54.990 19:39:44 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.990 19:39:44 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100487 00:23:54.990 19:39:44 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:54.990 19:39:44 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:54.991 killing process with pid 100487 00:23:54.991 19:39:44 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100487' 00:23:54.991 19:39:44 keyring_linux -- common/autotest_common.sh@967 -- # kill 100487 00:23:54.991 19:39:44 keyring_linux -- common/autotest_common.sh@972 -- # wait 100487 00:23:55.248 00:23:55.248 real 0m5.041s 00:23:55.248 user 0m10.428s 00:23:55.248 sys 0m1.368s 00:23:55.248 19:39:44 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.248 ************************************ 00:23:55.248 END TEST keyring_linux 00:23:55.248 19:39:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:55.248 ************************************ 00:23:55.248 19:39:44 -- common/autotest_common.sh@1142 -- # return 0 00:23:55.248 19:39:44 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:55.248 19:39:44 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:23:55.249 19:39:44 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:23:55.249 19:39:44 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:23:55.249 19:39:44 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:23:55.249 19:39:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:55.249 19:39:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:55.249 19:39:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:23:55.249 19:39:44 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:23:55.249 19:39:44 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:23:55.249 19:39:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:55.249 19:39:44 -- common/autotest_common.sh@10 -- # set +x 00:23:55.249 19:39:44 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:23:55.249 19:39:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:55.249 19:39:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:55.249 19:39:44 -- common/autotest_common.sh@10 -- # set +x 00:23:56.620 INFO: APP EXITING 00:23:56.620 INFO: killing all VMs 00:23:56.620 INFO: killing vhost app 00:23:56.620 INFO: EXIT DONE 00:23:57.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:57.187 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:57.187 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:57.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:57.753 Cleaning 00:23:57.753 Removing: /var/run/dpdk/spdk0/config 00:23:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:57.753 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:57.753 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:57.753 Removing: /var/run/dpdk/spdk1/config 00:23:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:57.753 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:57.753 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:57.753 Removing: /var/run/dpdk/spdk2/config 00:23:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:57.753 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:57.753 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:57.753 Removing: /var/run/dpdk/spdk3/config 00:23:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:57.753 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:57.753 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:23:57.753 Removing: /var/run/dpdk/spdk4/config 00:23:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:57.753 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:57.753 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:57.753 Removing: /dev/shm/nvmf_trace.0 00:23:57.753 Removing: /dev/shm/spdk_tgt_trace.pid60648 00:23:57.753 Removing: /var/run/dpdk/spdk0 00:23:57.753 Removing: /var/run/dpdk/spdk1 00:23:57.753 Removing: /var/run/dpdk/spdk2 00:23:57.753 Removing: /var/run/dpdk/spdk3 00:23:57.753 Removing: /var/run/dpdk/spdk4 00:23:57.753 Removing: /var/run/dpdk/spdk_pid100334 00:23:57.753 Removing: /var/run/dpdk/spdk_pid100487 00:23:57.753 Removing: /var/run/dpdk/spdk_pid100511 00:23:57.753 Removing: /var/run/dpdk/spdk_pid60509 00:23:57.753 Removing: /var/run/dpdk/spdk_pid60648 00:23:57.753 Removing: /var/run/dpdk/spdk_pid60896 00:23:57.753 Removing: /var/run/dpdk/spdk_pid60983 00:23:57.753 Removing: /var/run/dpdk/spdk_pid61027 00:23:57.753 Removing: /var/run/dpdk/spdk_pid61132 00:23:57.753 Removing: /var/run/dpdk/spdk_pid61162 00:23:57.753 Removing: /var/run/dpdk/spdk_pid61280 00:23:57.753 Removing: /var/run/dpdk/spdk_pid61555 00:23:58.011 Removing: /var/run/dpdk/spdk_pid61725 00:23:58.011 Removing: /var/run/dpdk/spdk_pid61807 00:23:58.011 Removing: /var/run/dpdk/spdk_pid61894 00:23:58.011 Removing: /var/run/dpdk/spdk_pid61983 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62022 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62057 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62113 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62223 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62854 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62918 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62976 00:23:58.011 Removing: /var/run/dpdk/spdk_pid62996 00:23:58.011 Removing: /var/run/dpdk/spdk_pid63075 00:23:58.011 Removing: /var/run/dpdk/spdk_pid63103 00:23:58.011 Removing: /var/run/dpdk/spdk_pid63171 00:23:58.011 Removing: /var/run/dpdk/spdk_pid63199 00:23:58.011 Removing: /var/run/dpdk/spdk_pid63256 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63286 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63332 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63363 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63509 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63544 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63614 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63683 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63707 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63766 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63800 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63835 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63869 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63904 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63933 00:23:58.012 Removing: /var/run/dpdk/spdk_pid63967 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64004 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64033 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64073 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64102 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64131 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64171 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64200 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64242 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64271 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64306 00:23:58.012 Removing: 
/var/run/dpdk/spdk_pid64343 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64375 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64410 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64446 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64511 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64603 00:23:58.012 Removing: /var/run/dpdk/spdk_pid64997 00:23:58.012 Removing: /var/run/dpdk/spdk_pid68302 00:23:58.012 Removing: /var/run/dpdk/spdk_pid68644 00:23:58.012 Removing: /var/run/dpdk/spdk_pid71087 00:23:58.012 Removing: /var/run/dpdk/spdk_pid71471 00:23:58.012 Removing: /var/run/dpdk/spdk_pid71724 00:23:58.012 Removing: /var/run/dpdk/spdk_pid71769 00:23:58.012 Removing: /var/run/dpdk/spdk_pid72392 00:23:58.012 Removing: /var/run/dpdk/spdk_pid72829 00:23:58.012 Removing: /var/run/dpdk/spdk_pid72878 00:23:58.012 Removing: /var/run/dpdk/spdk_pid73238 00:23:58.012 Removing: /var/run/dpdk/spdk_pid73766 00:23:58.012 Removing: /var/run/dpdk/spdk_pid74214 00:23:58.012 Removing: /var/run/dpdk/spdk_pid75144 00:23:58.012 Removing: /var/run/dpdk/spdk_pid76091 00:23:58.012 Removing: /var/run/dpdk/spdk_pid76214 00:23:58.012 Removing: /var/run/dpdk/spdk_pid76277 00:23:58.012 Removing: /var/run/dpdk/spdk_pid77716 00:23:58.012 Removing: /var/run/dpdk/spdk_pid77931 00:23:58.012 Removing: /var/run/dpdk/spdk_pid83336 00:23:58.012 Removing: /var/run/dpdk/spdk_pid83781 00:23:58.012 Removing: /var/run/dpdk/spdk_pid83887 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84020 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84052 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84102 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84130 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84280 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84433 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84689 00:23:58.012 Removing: /var/run/dpdk/spdk_pid84805 00:23:58.012 Removing: /var/run/dpdk/spdk_pid85036 00:23:58.012 Removing: /var/run/dpdk/spdk_pid85149 00:23:58.012 Removing: /var/run/dpdk/spdk_pid85257 00:23:58.012 Removing: /var/run/dpdk/spdk_pid85606 00:23:58.012 Removing: /var/run/dpdk/spdk_pid85992 00:23:58.012 Removing: /var/run/dpdk/spdk_pid86288 00:23:58.012 Removing: /var/run/dpdk/spdk_pid86779 00:23:58.012 Removing: /var/run/dpdk/spdk_pid86781 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87126 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87140 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87164 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87191 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87202 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87533 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87586 00:23:58.012 Removing: /var/run/dpdk/spdk_pid87908 00:23:58.012 Removing: /var/run/dpdk/spdk_pid88160 00:23:58.012 Removing: /var/run/dpdk/spdk_pid88636 00:23:58.012 Removing: /var/run/dpdk/spdk_pid89212 00:23:58.012 Removing: /var/run/dpdk/spdk_pid90562 00:23:58.012 Removing: /var/run/dpdk/spdk_pid91138 00:23:58.012 Removing: /var/run/dpdk/spdk_pid91140 00:23:58.012 Removing: /var/run/dpdk/spdk_pid93064 00:23:58.012 Removing: /var/run/dpdk/spdk_pid93153 00:23:58.012 Removing: /var/run/dpdk/spdk_pid93245 00:23:58.012 Removing: /var/run/dpdk/spdk_pid93334 00:23:58.012 Removing: /var/run/dpdk/spdk_pid93461 00:23:58.012 Removing: /var/run/dpdk/spdk_pid93538 00:23:58.271 Removing: /var/run/dpdk/spdk_pid93610 00:23:58.271 Removing: /var/run/dpdk/spdk_pid93687 00:23:58.271 Removing: /var/run/dpdk/spdk_pid94003 00:23:58.271 Removing: /var/run/dpdk/spdk_pid94688 00:23:58.271 Removing: /var/run/dpdk/spdk_pid96032 00:23:58.271 Removing: /var/run/dpdk/spdk_pid96234 
00:23:58.271 Removing: /var/run/dpdk/spdk_pid96521 00:23:58.271 Removing: /var/run/dpdk/spdk_pid96816 00:23:58.271 Removing: /var/run/dpdk/spdk_pid97356 00:23:58.271 Removing: /var/run/dpdk/spdk_pid97361 00:23:58.271 Removing: /var/run/dpdk/spdk_pid97720 00:23:58.271 Removing: /var/run/dpdk/spdk_pid97879 00:23:58.271 Removing: /var/run/dpdk/spdk_pid98025 00:23:58.271 Removing: /var/run/dpdk/spdk_pid98122 00:23:58.271 Removing: /var/run/dpdk/spdk_pid98272 00:23:58.271 Removing: /var/run/dpdk/spdk_pid98381 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99029 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99070 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99104 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99353 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99388 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99418 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99841 00:23:58.271 Removing: /var/run/dpdk/spdk_pid99876 00:23:58.271 Clean 00:23:58.271 19:39:47 -- common/autotest_common.sh@1451 -- # return 0 00:23:58.271 19:39:47 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:23:58.271 19:39:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.271 19:39:47 -- common/autotest_common.sh@10 -- # set +x 00:23:58.271 19:39:47 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:23:58.271 19:39:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.271 19:39:47 -- common/autotest_common.sh@10 -- # set +x 00:23:58.271 19:39:47 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:58.271 19:39:47 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:58.271 19:39:47 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:58.271 19:39:48 -- spdk/autotest.sh@391 -- # hash lcov 00:23:58.271 19:39:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:58.271 19:39:48 -- spdk/autotest.sh@393 -- # hostname 00:23:58.271 19:39:48 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:58.529 geninfo: WARNING: invalid characters removed from testname! 
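The lcov call just above captures the coverage gathered during this run into cov_test.info (tagged with the hostname), and the entries that follow merge that capture with the pre-test baseline and strip third-party paths. A condensed sketch of that post-processing flow, using the repository and output paths shown in the surrounding log entries — illustrative only, not the exact autotest.sh code, and with the long list of --rc options abbreviated:

#!/usr/bin/env bash
# Condensed mirror of the lcov post-processing steps recorded in this log.
set -euo pipefail

repo=/home/vagrant/spdk_repo/spdk
out="$repo/../output"
# Subset of the --rc flags used in the log above.
rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

# Capture coverage collected during the test run, tagged with the hostname.
lcov "${rc[@]}" --no-external -q -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov "${rc[@]}" --no-external -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Remove DPDK, system, and example/app paths from the combined report,
# as the following log entries do one pattern at a time.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${rc[@]}" -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done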
00:24:30.637 19:40:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:32.536 19:40:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:35.816 19:40:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:39.152 19:40:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:42.428 19:40:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:45.051 19:40:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:48.360 19:40:37 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:48.360 19:40:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:48.360 19:40:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:48.360 19:40:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.360 19:40:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.360 19:40:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.360 19:40:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.360 19:40:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.360 19:40:37 -- paths/export.sh@5 -- $ export PATH 00:24:48.361 19:40:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.361 19:40:37 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:48.361 19:40:37 -- common/autobuild_common.sh@444 -- $ date +%s 00:24:48.361 19:40:37 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721072437.XXXXXX 00:24:48.361 19:40:37 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721072437.2vL0I4 00:24:48.361 19:40:37 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:24:48.361 19:40:37 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:24:48.361 19:40:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:48.361 19:40:37 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:48.361 19:40:37 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:48.361 19:40:37 -- common/autobuild_common.sh@460 -- $ get_config_params 00:24:48.361 19:40:37 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:24:48.361 19:40:37 -- common/autotest_common.sh@10 -- $ set +x 00:24:48.361 19:40:37 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:24:48.361 19:40:37 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:24:48.361 19:40:37 -- pm/common@17 -- $ local monitor 00:24:48.361 19:40:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:48.361 19:40:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:48.361 19:40:37 -- pm/common@25 -- $ sleep 1 00:24:48.361 19:40:37 -- pm/common@21 -- $ date +%s 00:24:48.361 19:40:37 -- pm/common@21 -- $ date +%s 00:24:48.361 19:40:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721072437 00:24:48.361 19:40:37 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721072437 00:24:48.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721072437_collect-vmstat.pm.log 00:24:48.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721072437_collect-cpu-load.pm.log 00:24:48.935 19:40:38 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:24:48.935 19:40:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:24:48.935 19:40:38 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:48.936 19:40:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:48.936 19:40:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:48.936 19:40:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:48.936 19:40:38 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:48.936 19:40:38 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:48.936 19:40:38 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:48.936 19:40:38 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:48.936 19:40:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:48.936 19:40:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:48.936 19:40:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:48.936 19:40:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:48.936 19:40:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:48.936 19:40:38 -- pm/common@44 -- $ pid=102205 00:24:48.936 19:40:38 -- pm/common@50 -- $ kill -TERM 102205 00:24:48.936 19:40:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:48.936 19:40:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:48.936 19:40:38 -- pm/common@44 -- $ pid=102206 00:24:48.936 19:40:38 -- pm/common@50 -- $ kill -TERM 102206 00:24:48.936 + [[ -n 5166 ]] 00:24:48.936 + sudo kill 5166 00:24:49.878 [Pipeline] } 00:24:49.898 [Pipeline] // timeout 00:24:49.905 [Pipeline] } 00:24:49.925 [Pipeline] // stage 00:24:49.931 [Pipeline] } 00:24:49.948 [Pipeline] // catchError 00:24:49.958 [Pipeline] stage 00:24:49.961 [Pipeline] { (Stop VM) 00:24:49.978 [Pipeline] sh 00:24:50.256 + vagrant halt 00:24:54.450 ==> default: Halting domain... 00:25:01.012 [Pipeline] sh 00:25:01.289 + vagrant destroy -f 00:25:06.547 ==> default: Removing domain... 
00:25:06.566 [Pipeline] sh 00:25:06.916 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:25:06.926 [Pipeline] } 00:25:06.943 [Pipeline] // stage 00:25:06.950 [Pipeline] } 00:25:06.973 [Pipeline] // dir 00:25:06.978 [Pipeline] } 00:25:07.001 [Pipeline] // wrap 00:25:07.008 [Pipeline] } 00:25:07.028 [Pipeline] // catchError 00:25:07.040 [Pipeline] stage 00:25:07.042 [Pipeline] { (Epilogue) 00:25:07.061 [Pipeline] sh 00:25:07.338 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:15.450 [Pipeline] catchError 00:25:15.452 [Pipeline] { 00:25:15.467 [Pipeline] sh 00:25:15.746 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:16.005 Artifacts sizes are good 00:25:16.014 [Pipeline] } 00:25:16.033 [Pipeline] // catchError 00:25:16.046 [Pipeline] archiveArtifacts 00:25:16.054 Archiving artifacts 00:25:16.253 [Pipeline] cleanWs 00:25:16.264 [WS-CLEANUP] Deleting project workspace... 00:25:16.264 [WS-CLEANUP] Deferred wipeout is used... 00:25:16.271 [WS-CLEANUP] done 00:25:16.273 [Pipeline] } 00:25:16.291 [Pipeline] // stage 00:25:16.298 [Pipeline] } 00:25:16.316 [Pipeline] // node 00:25:16.324 [Pipeline] End of Pipeline 00:25:16.362 Finished: SUCCESS
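For reference, the keyring_linux exercise recorded near the top of this section reduces to the RPC and keyctl sequence below. The socket path, key name, and target parameters are copied from the log entries above; treat this as an illustrative replay of what the test drives, not the test script itself. It assumes a bdevperf instance is already listening on /var/tmp/bperf.sock and that the ":spdk-test:key0" user key already exists in the session keyring.

#!/usr/bin/env bash
# Illustrative replay of the keyring_linux flow shown earlier in this log.
set -euo pipefail

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Enable Linux-keyring-backed keys in the running bdevperf app, then finish init.
rpc keyring_linux_set_options --enable
rpc framework_start_init

# Attach an NVMe-oF/TCP controller whose TLS PSK is resolved through the kernel keyring.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# Look up the key's serial number in the session keyring and print its payload,
# as the test does when validating the key that was used.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"

# Cleanup, mirroring the tail of the test: detach the controller and drop the key.
rpc bdev_nvme_detach_controller nvme0
keyctl unlink "$sn"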